Test Report: Docker_Linux_crio_arm64 21773

8990789ccd20605bfce25419a1a009c7a75246f6:2025-10-20:41995

Failed tests (36/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.7
35 TestAddons/parallel/Registry 15.95
36 TestAddons/parallel/RegistryCreds 0.55
37 TestAddons/parallel/Ingress 146.09
38 TestAddons/parallel/InspektorGadget 6.31
39 TestAddons/parallel/MetricsServer 6.38
41 TestAddons/parallel/CSI 38.32
42 TestAddons/parallel/Headlamp 3.14
43 TestAddons/parallel/CloudSpanner 6.28
44 TestAddons/parallel/LocalPath 8.46
45 TestAddons/parallel/NvidiaDevicePlugin 6.28
46 TestAddons/parallel/Yakd 6.29
98 TestFunctional/parallel/ServiceCmdConnect 603.52
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.93
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.67
136 TestFunctional/parallel/ServiceCmd/Format 0.55
137 TestFunctional/parallel/ServiceCmd/URL 0.41
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.32
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.87
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
191 TestJSONOutput/pause/Command 2.31
197 TestJSONOutput/unpause/Command 1.68
271 TestPause/serial/Pause 8.08
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.47
303 TestStartStop/group/old-k8s-version/serial/Pause 6.31
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.86
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.21
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.37
327 TestStartStop/group/embed-certs/serial/Pause 7.83
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.39
338 TestStartStop/group/newest-cni/serial/Pause 6.07
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.12
350 TestStartStop/group/no-preload/serial/Pause 8.06
TestAddons/serial/Volcano (0.7s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable volcano --alsologtostderr -v=1: exit status 11 (700.465539ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:19:55.220293  304978 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:19:55.221363  304978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:19:55.221514  304978 out.go:374] Setting ErrFile to fd 2...
	I1020 12:19:55.221521  304978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:19:55.221794  304978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:19:55.222161  304978 mustload.go:65] Loading cluster: addons-399470
	I1020 12:19:55.222606  304978 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:19:55.222628  304978 addons.go:606] checking whether the cluster is paused
	I1020 12:19:55.222735  304978 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:19:55.222758  304978 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:19:55.223189  304978 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:19:55.256702  304978 ssh_runner.go:195] Run: systemctl --version
	I1020 12:19:55.256763  304978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:19:55.274787  304978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:19:55.382758  304978 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:19:55.382853  304978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:19:55.417897  304978 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:19:55.417918  304978 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:19:55.417923  304978 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:19:55.417926  304978 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:19:55.417930  304978 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:19:55.417935  304978 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:19:55.417938  304978 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:19:55.417941  304978 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:19:55.417945  304978 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:19:55.417952  304978 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:19:55.417955  304978 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:19:55.417958  304978 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:19:55.417962  304978 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:19:55.417965  304978 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:19:55.417968  304978 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:19:55.417975  304978 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:19:55.417979  304978 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:19:55.417983  304978 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:19:55.417986  304978 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:19:55.417989  304978 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:19:55.417994  304978 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:19:55.417997  304978 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:19:55.418000  304978 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:19:55.418002  304978 cri.go:89] found id: ""
	I1020 12:19:55.418059  304978 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:19:55.433443  304978 out.go:203] 
	W1020 12:19:55.436332  304978 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:19:55.436357  304978 out.go:285] * 
	* 
	W1020 12:19:55.831684  304978 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:19:55.834867  304978 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.70s)
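
Note on the failure mode: before disabling an addon, minikube checks whether the cluster is paused by listing runc containers, and on this CRI-O node "sudo runc list -f json" exits 1 because the default runc state directory /run/runc does not exist. The same MK_ADDON_DISABLE_PAUSED signature recurs verbatim in the other addon-disable failures shown below. A minimal sketch for re-running the check by hand, assuming SSH access to this profile's node (the final ls is a hypothetical follow-up, not something the test runs):

	# The two commands minikube executes for its paused check (from the stderr above):
	minikube -p addons-399470 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-399470 ssh -- sudo runc list -f json   # reproduces: open /run/runc: no such file or directory
	# Hypothetical follow-up: see which runtime state directories actually exist on the node
	minikube -p addons-399470 ssh -- ls /run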

TestAddons/parallel/Registry (15.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 12.837692ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003756559s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002977883s
addons_test.go:392: (dbg) Run:  kubectl --context addons-399470 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-399470 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-399470 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.336234766s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 ip
2025/10/20 12:20:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable registry --alsologtostderr -v=1: exit status 11 (292.052981ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:20:22.815253  305938 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:22.816084  305938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:22.816124  305938 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:22.816150  305938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:22.816541  305938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:22.816966  305938 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:22.817761  305938 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:22.817812  305938 addons.go:606] checking whether the cluster is paused
	I1020 12:20:22.817977  305938 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:22.818020  305938 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:22.818578  305938 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:22.841362  305938 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:22.841422  305938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:22.864991  305938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:22.975588  305938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:22.975694  305938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:23.006751  305938 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:23.006776  305938 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:23.006781  305938 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:23.006785  305938 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:23.006789  305938 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:23.006793  305938 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:23.006796  305938 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:23.006800  305938 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:23.006819  305938 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:23.006834  305938 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:23.006838  305938 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:23.006842  305938 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:23.006845  305938 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:23.006848  305938 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:23.006852  305938 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:23.006887  305938 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:23.006899  305938 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:23.006905  305938 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:23.006908  305938 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:23.006924  305938 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:23.006931  305938 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:23.006935  305938 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:23.006937  305938 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:23.006940  305938 cri.go:89] found id: ""
	I1020 12:20:23.007013  305938 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:23.024676  305938 out.go:203] 
	W1020 12:20:23.029651  305938 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:23.029689  305938 out.go:285] * 
	* 
	W1020 12:20:23.036633  305938 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:23.041420  305938 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.95s)
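
For context, the registry addon itself was healthy in this run: both pods became Ready, the in-cluster wget probe succeeded, and the node answered on port 5000; only the trailing "addons disable registry" step failed, with the same runc signature as above. A sketch of checking the registry by hand while the profile is up (the /v2/_catalog path is the standard registry HTTP API, not an endpoint this test exercises):

	# In-cluster probe, as the test does it:
	kubectl --context addons-399470 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# Host-side probe against the node IP (assumes the v2 API is served on :5000):
	curl -s "http://$(minikube -p addons-399470 ip):5000/v2/_catalog"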

TestAddons/parallel/RegistryCreds (0.55s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.771084ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-399470
addons_test.go:332: (dbg) Run:  kubectl --context addons-399470 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (266.615901ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:20:48.892468  306978 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:48.893256  306978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:48.893294  306978 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:48.893326  306978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:48.893623  306978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:48.893982  306978 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:48.894413  306978 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:48.894458  306978 addons.go:606] checking whether the cluster is paused
	I1020 12:20:48.894597  306978 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:48.894641  306978 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:48.895116  306978 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:48.914483  306978 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:48.914542  306978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:48.933190  306978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:49.039230  306978 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:49.039327  306978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:49.074668  306978 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:49.074691  306978 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:49.074696  306978 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:49.074700  306978 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:49.074703  306978 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:49.074707  306978 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:49.074710  306978 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:49.074713  306978 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:49.074716  306978 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:49.074722  306978 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:49.074726  306978 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:49.074731  306978 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:49.074737  306978 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:49.074741  306978 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:49.074745  306978 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:49.074750  306978 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:49.074757  306978 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:49.074761  306978 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:49.074764  306978 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:49.074767  306978 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:49.074772  306978 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:49.074775  306978 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:49.074778  306978 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:49.074781  306978 cri.go:89] found id: ""
	I1020 12:20:49.074838  306978 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:49.090813  306978 out.go:203] 
	W1020 12:20:49.093773  306978 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:49.093815  306978 out.go:285] * 
	* 
	W1020 12:20:49.100414  306978 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:49.103359  306978 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.55s)

TestAddons/parallel/Ingress (146.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-399470 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-399470 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-399470 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [adc0d378-f824-4b75-91f0-55b83a0f7ab0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [adc0d378-f824-4b75-91f0-55b83a0f7ab0] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003826632s
I1020 12:20:46.479640  298259 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.018956583s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-399470 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
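
The 2m11s curl failure above is a timeout: exit status 28 is curl's "operation timed out" code surfaced through ssh, so no backend answered the Host: nginx.example.com request within the deadline. A few hedged triage commands; standard kubectl, with the controller deployment name assumed from the usual ingress-nginx layout rather than taken from this log:

	kubectl --context addons-399470 get ingress -A
	kubectl --context addons-399470 -n ingress-nginx get pods -o wide
	# Deployment name assumed (typical for the minikube ingress addon):
	kubectl --context addons-399470 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50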
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-399470
helpers_test.go:243: (dbg) docker inspect addons-399470:

-- stdout --
	[
	    {
	        "Id": "feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e",
	        "Created": "2025-10-20T12:17:30.309681277Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:17:30.379063183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e/hosts",
	        "LogPath": "/var/lib/docker/containers/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e-json.log",
	        "Name": "/addons-399470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-399470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-399470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e",
	                "LowerDir": "/var/lib/docker/overlay2/d9f7cc9e743a0ee4922d5bf484897f5a2ee3b1487ccbbdc2d98acfeb6c319e8d-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9f7cc9e743a0ee4922d5bf484897f5a2ee3b1487ccbbdc2d98acfeb6c319e8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9f7cc9e743a0ee4922d5bf484897f5a2ee3b1487ccbbdc2d98acfeb6c319e8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9f7cc9e743a0ee4922d5bf484897f5a2ee3b1487ccbbdc2d98acfeb6c319e8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-399470",
	                "Source": "/var/lib/docker/volumes/addons-399470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-399470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-399470",
	                "name.minikube.sigs.k8s.io": "addons-399470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64501902fdd201d4a8cb029fd7ca931da996468a10fd70cf66a0e3976149cd7a",
	            "SandboxKey": "/var/run/docker/netns/64501902fdd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-399470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:5f:2e:27:19:be",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ce96cfa0d925123f13aa5a319160f0921a5320860cfb9b4d9bc94640f9e40690",
	                    "EndpointID": "660f76ea179077c81f697755930b43000ebe63e4022ac0c4e06f324bd74e4900",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-399470",
	                        "feca8d58fd70"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-399470 -n addons-399470
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-399470 logs -n 25: (1.729893146s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-415037                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-415037 │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:17 UTC │
	│ start   │ --download-only -p binary-mirror-776162 --alsologtostderr --binary-mirror http://127.0.0.1:35451 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-776162   │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │                     │
	│ delete  │ -p binary-mirror-776162                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-776162   │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p addons-399470                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │                     │
	│ addons  │ disable dashboard -p addons-399470                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │                     │
	│ start   │ -p addons-399470 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:19 UTC │
	│ addons  │ addons-399470 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-399470 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ enable headlamp -p addons-399470 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ addons-399470 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ ip      │ addons-399470 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │ 20 Oct 25 12:20 UTC │
	│ addons  │ addons-399470 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ addons-399470 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ addons-399470 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ ssh     │ addons-399470 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ addons-399470 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ addons-399470 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-399470                                                                                                                                                                                                                                                                                                                                                                                           │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │ 20 Oct 25 12:20 UTC │
	│ addons  │ addons-399470 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ addons-399470 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ addons-399470 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:21 UTC │                     │
	│ ssh     │ addons-399470 ssh cat /opt/local-path-provisioner/pvc-806b0eb2-ecde-4da7-8807-a7df9f295882_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:21 UTC │ 20 Oct 25 12:21 UTC │
	│ addons  │ addons-399470 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:21 UTC │                     │
	│ addons  │ addons-399470 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:21 UTC │                     │
	│ ip      │ addons-399470 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:22 UTC │ 20 Oct 25 12:22 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:17:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
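	The [IWEF]mmdd prefix documented above is klog's severity-plus-date header (Info, Warning, Error, Fatal). A quick way to pull only warnings and errors out of a capture like this one (a hedged sketch; the filename is illustrative):
	
	  grep -E '^[[:space:]]*[WE][0-9]{4} ' last-start.log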
	I1020 12:17:03.166956  299029 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:17:03.167079  299029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:17:03.167091  299029 out.go:374] Setting ErrFile to fd 2...
	I1020 12:17:03.167096  299029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:17:03.167373  299029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:17:03.167844  299029 out.go:368] Setting JSON to false
	I1020 12:17:03.168730  299029 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7174,"bootTime":1760955450,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 12:17:03.168807  299029 start.go:141] virtualization:  
	I1020 12:17:03.172171  299029 out.go:179] * [addons-399470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 12:17:03.175964  299029 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:17:03.176078  299029 notify.go:220] Checking for updates...
	I1020 12:17:03.181824  299029 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:17:03.184978  299029 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:17:03.187988  299029 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 12:17:03.190989  299029 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 12:17:03.193974  299029 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:17:03.197190  299029 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:17:03.230152  299029 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 12:17:03.230287  299029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:17:03.304607  299029 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-20 12:17:03.287974184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
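	minikube derives the daemon's capabilities from this JSON dump (cgroup driver, CPU and memory, security options). The same fields can be queried directly with the Docker CLI's Go templates; a minimal sketch, assuming a standard docker client:
	
	  docker system info --format '{{.CgroupDriver}} {{.NCPU}} {{.MemTotal}}'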
	I1020 12:17:03.304720  299029 docker.go:318] overlay module found
	I1020 12:17:03.307882  299029 out.go:179] * Using the docker driver based on user configuration
	I1020 12:17:03.310749  299029 start.go:305] selected driver: docker
	I1020 12:17:03.310780  299029 start.go:925] validating driver "docker" against <nil>
	I1020 12:17:03.310796  299029 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:17:03.311553  299029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:17:03.377839  299029 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-20 12:17:03.368779365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:17:03.378005  299029 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:17:03.378248  299029 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:17:03.381234  299029 out.go:179] * Using Docker driver with root privileges
	I1020 12:17:03.384054  299029 cni.go:84] Creating CNI manager for ""
	I1020 12:17:03.384133  299029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:17:03.384148  299029 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:17:03.384243  299029 start.go:349] cluster config:
	{Name:addons-399470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:17:03.389439  299029 out.go:179] * Starting "addons-399470" primary control-plane node in "addons-399470" cluster
	I1020 12:17:03.392249  299029 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:17:03.395285  299029 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:17:03.398213  299029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:17:03.398293  299029 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 12:17:03.398334  299029 cache.go:58] Caching tarball of preloaded images
	I1020 12:17:03.398338  299029 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:17:03.398459  299029 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 12:17:03.398472  299029 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:17:03.398845  299029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/config.json ...
	I1020 12:17:03.398884  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/config.json: {Name:mk5fa3974ca8c54458c0ea6e39b79eac041c96b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
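	The profile saved here is plain JSON mirroring the cluster config dumped above, so individual settings can be read back with jq; a sketch using this run's paths:
	
	  jq '.Driver, .KubernetesConfig.KubernetesVersion' /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/config.json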
	I1020 12:17:03.415327  299029 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 12:17:03.415469  299029 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1020 12:17:03.415503  299029 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1020 12:17:03.415513  299029 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1020 12:17:03.415524  299029 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1020 12:17:03.415529  299029 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1020 12:17:21.380747  299029 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1020 12:17:21.380786  299029 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:17:21.380829  299029 start.go:360] acquireMachinesLock for addons-399470: {Name:mk012d6cf29d0e9498230bc3f730a78d550291e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:17:21.380957  299029 start.go:364] duration metric: took 109.983µs to acquireMachinesLock for "addons-399470"
	I1020 12:17:21.380985  299029 start.go:93] Provisioning new machine with config: &{Name:addons-399470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:17:21.381061  299029 start.go:125] createHost starting for "" (driver="docker")
	I1020 12:17:21.384445  299029 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1020 12:17:21.384677  299029 start.go:159] libmachine.API.Create for "addons-399470" (driver="docker")
	I1020 12:17:21.384729  299029 client.go:168] LocalClient.Create starting
	I1020 12:17:21.384842  299029 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem
	I1020 12:17:22.195478  299029 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem
	I1020 12:17:23.622043  299029 cli_runner.go:164] Run: docker network inspect addons-399470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:17:23.636749  299029 cli_runner.go:211] docker network inspect addons-399470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:17:23.636843  299029 network_create.go:284] running [docker network inspect addons-399470] to gather additional debugging logs...
	I1020 12:17:23.636870  299029 cli_runner.go:164] Run: docker network inspect addons-399470
	W1020 12:17:23.652590  299029 cli_runner.go:211] docker network inspect addons-399470 returned with exit code 1
	I1020 12:17:23.652625  299029 network_create.go:287] error running [docker network inspect addons-399470]: docker network inspect addons-399470: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-399470 not found
	I1020 12:17:23.652640  299029 network_create.go:289] output of [docker network inspect addons-399470]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-399470 not found
	
	** /stderr **
	I1020 12:17:23.652758  299029 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:17:23.669025  299029 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a24550}
	I1020 12:17:23.669069  299029 network_create.go:124] attempt to create docker network addons-399470 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1020 12:17:23.669131  299029 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-399470 addons-399470
	I1020 12:17:23.724793  299029 network_create.go:108] docker network addons-399470 192.168.49.0/24 created
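	The subnet and gateway chosen above can be confirmed after creation with docker network inspect; a sketch:
	
	  docker network inspect addons-399470 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'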
	I1020 12:17:23.724827  299029 kic.go:121] calculated static IP "192.168.49.2" for the "addons-399470" container
	I1020 12:17:23.724916  299029 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:17:23.740653  299029 cli_runner.go:164] Run: docker volume create addons-399470 --label name.minikube.sigs.k8s.io=addons-399470 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:17:23.757399  299029 oci.go:103] Successfully created a docker volume addons-399470
	I1020 12:17:23.757506  299029 cli_runner.go:164] Run: docker run --rm --name addons-399470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399470 --entrypoint /usr/bin/test -v addons-399470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:17:25.831159  299029 cli_runner.go:217] Completed: docker run --rm --name addons-399470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399470 --entrypoint /usr/bin/test -v addons-399470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.073596971s)
	I1020 12:17:25.831194  299029 oci.go:107] Successfully prepared a docker volume addons-399470
	I1020 12:17:25.831220  299029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:17:25.831238  299029 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:17:25.831305  299029 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-399470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:17:30.235995  299029 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-399470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.404636943s)
	I1020 12:17:30.236035  299029 kic.go:203] duration metric: took 4.404789954s to extract preloaded images to volume ...
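	The preload lands in the named volume rather than in any container, so it survives container re-creation. Its layout can be spot-checked with a throwaway container (a sketch; busybox and the /var/lib/containers path are assumptions, any image with ls works):
	
	  docker run --rm -v addons-399470:/var busybox ls /var/lib/containers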
	W1020 12:17:30.236182  299029 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1020 12:17:30.236296  299029 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:17:30.295422  299029 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-399470 --name addons-399470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-399470 --network addons-399470 --ip 192.168.49.2 --volume addons-399470:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 12:17:30.573434  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Running}}
	I1020 12:17:30.600030  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:17:30.619844  299029 cli_runner.go:164] Run: docker exec addons-399470 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:17:30.672162  299029 oci.go:144] the created container "addons-399470" has a running status.
	I1020 12:17:30.672193  299029 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa...
	I1020 12:17:31.979368  299029 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:17:32.014554  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:17:32.031318  299029 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:17:32.031353  299029 kic_runner.go:114] Args: [docker exec --privileged addons-399470 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:17:32.074866  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:17:32.092942  299029 machine.go:93] provisionDockerMachine start ...
	I1020 12:17:32.093047  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:32.110353  299029 main.go:141] libmachine: Using SSH client type: native
	I1020 12:17:32.110680  299029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1020 12:17:32.110695  299029 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:17:32.259922  299029 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-399470
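	The 127.0.0.1 33138 endpoint dialed above is the ephemeral host port Docker published for the container's SSH port; it can be recovered at any time (sketch):
	
	  docker port addons-399470 22/tcp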
	
	I1020 12:17:32.259949  299029 ubuntu.go:182] provisioning hostname "addons-399470"
	I1020 12:17:32.260013  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:32.277534  299029 main.go:141] libmachine: Using SSH client type: native
	I1020 12:17:32.277837  299029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1020 12:17:32.277853  299029 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-399470 && echo "addons-399470" | sudo tee /etc/hostname
	I1020 12:17:32.433240  299029 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-399470
	
	I1020 12:17:32.433322  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:32.450483  299029 main.go:141] libmachine: Using SSH client type: native
	I1020 12:17:32.450793  299029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1020 12:17:32.450813  299029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-399470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-399470/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-399470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:17:32.596399  299029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:17:32.596423  299029 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 12:17:32.596459  299029 ubuntu.go:190] setting up certificates
	I1020 12:17:32.596469  299029 provision.go:84] configureAuth start
	I1020 12:17:32.596526  299029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399470
	I1020 12:17:32.612783  299029 provision.go:143] copyHostCerts
	I1020 12:17:32.612872  299029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 12:17:32.613001  299029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 12:17:32.613066  299029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 12:17:32.613125  299029 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.addons-399470 san=[127.0.0.1 192.168.49.2 addons-399470 localhost minikube]
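	The san=[...] list above becomes the certificate's subjectAltName extension and fixes which addresses the server cert is valid for. It can be inspected with openssl (sketch; -ext needs OpenSSL 1.1.1 or newer):
	
	  openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem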
	I1020 12:17:32.886848  299029 provision.go:177] copyRemoteCerts
	I1020 12:17:32.886916  299029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:17:32.886988  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:32.903037  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.022250  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 12:17:33.040073  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 12:17:33.057449  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:17:33.075131  299029 provision.go:87] duration metric: took 478.638155ms to configureAuth
	I1020 12:17:33.075161  299029 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:17:33.075354  299029 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:17:33.075459  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.092243  299029 main.go:141] libmachine: Using SSH client type: native
	I1020 12:17:33.092612  299029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1020 12:17:33.092638  299029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:17:33.349846  299029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:17:33.349869  299029 machine.go:96] duration metric: took 1.256903785s to provisionDockerMachine
	I1020 12:17:33.349880  299029 client.go:171] duration metric: took 11.965140982s to LocalClient.Create
	I1020 12:17:33.349923  299029 start.go:167] duration metric: took 11.965244129s to libmachine.API.Create "addons-399470"
	I1020 12:17:33.349941  299029 start.go:293] postStartSetup for "addons-399470" (driver="docker")
	I1020 12:17:33.349953  299029 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:17:33.350043  299029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:17:33.350108  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.366577  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.473133  299029 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:17:33.476346  299029 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:17:33.476390  299029 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:17:33.476402  299029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 12:17:33.476472  299029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 12:17:33.476501  299029 start.go:296] duration metric: took 126.551017ms for postStartSetup
	I1020 12:17:33.476822  299029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399470
	I1020 12:17:33.493094  299029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/config.json ...
	I1020 12:17:33.493398  299029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:17:33.493446  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.510056  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.609373  299029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:17:33.613978  299029 start.go:128] duration metric: took 12.232902505s to createHost
	I1020 12:17:33.614006  299029 start.go:83] releasing machines lock for "addons-399470", held for 12.233038842s
	I1020 12:17:33.614077  299029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399470
	I1020 12:17:33.631448  299029 ssh_runner.go:195] Run: cat /version.json
	I1020 12:17:33.631510  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.631804  299029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:17:33.631874  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.653228  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.664148  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.756145  299029 ssh_runner.go:195] Run: systemctl --version
	I1020 12:17:33.848889  299029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:17:33.885627  299029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:17:33.889969  299029 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:17:33.890041  299029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:17:33.918264  299029 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1020 12:17:33.918341  299029 start.go:495] detecting cgroup driver to use...
	I1020 12:17:33.918411  299029 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 12:17:33.918487  299029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:17:33.934704  299029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:17:33.947217  299029 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:17:33.947283  299029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:17:33.964211  299029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:17:33.982445  299029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:17:34.099460  299029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:17:34.213310  299029 docker.go:234] disabling docker service ...
	I1020 12:17:34.213420  299029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:17:34.234090  299029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:17:34.247513  299029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:17:34.360786  299029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:17:34.476863  299029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:17:34.489355  299029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:17:34.503150  299029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:17:34.503218  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.511584  299029 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 12:17:34.511654  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.519797  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.528223  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.537286  299029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:17:34.545551  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.554033  299029 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.567024  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
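	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands, not captured from the node):
	
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]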
	I1020 12:17:34.576523  299029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:17:34.583949  299029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:17:34.591577  299029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:17:34.696260  299029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:17:34.814713  299029 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:17:34.814848  299029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:17:34.818612  299029 start.go:563] Will wait 60s for crictl version
	I1020 12:17:34.818730  299029 ssh_runner.go:195] Run: which crictl
	I1020 12:17:34.822225  299029 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:17:34.849915  299029 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
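	This version probe works because /etc/crictl.yaml, written a few steps earlier, points crictl at the CRI-O socket; the endpoint can also be passed explicitly, as in this sketch:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a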
	I1020 12:17:34.850090  299029 ssh_runner.go:195] Run: crio --version
	I1020 12:17:34.882627  299029 ssh_runner.go:195] Run: crio --version
	I1020 12:17:34.913762  299029 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:17:34.916643  299029 cli_runner.go:164] Run: docker network inspect addons-399470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:17:34.932391  299029 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1020 12:17:34.936233  299029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:17:34.945605  299029 kubeadm.go:883] updating cluster {Name:addons-399470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:17:34.945732  299029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:17:34.945794  299029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:17:34.986806  299029 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:17:34.986830  299029 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:17:34.986887  299029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:17:35.013805  299029 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:17:35.013831  299029 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:17:35.013840  299029 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1020 12:17:35.013929  299029 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-399470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
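	Once the unit and drop-in are written to disk a few lines below, the effective kubelet command line can be checked on the node with systemd's own tooling (sketch):
	
	  systemctl cat kubelet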
	I1020 12:17:35.014017  299029 ssh_runner.go:195] Run: crio config
	I1020 12:17:35.085286  299029 cni.go:84] Creating CNI manager for ""
	I1020 12:17:35.085312  299029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:17:35.085334  299029 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:17:35.085358  299029 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-399470 NodeName:addons-399470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:17:35.085485  299029 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-399470"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
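	Newer kubeadm releases can sanity-check a rendered config like this before it is used; a sketch against the path the file is copied to a few lines below:
	
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new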
	
	I1020 12:17:35.085569  299029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:17:35.094109  299029 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:17:35.094188  299029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:17:35.102253  299029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1020 12:17:35.115688  299029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:17:35.129492  299029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1020 12:17:35.142803  299029 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:17:35.146602  299029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:17:35.157150  299029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:17:35.276025  299029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:17:35.293368  299029 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470 for IP: 192.168.49.2
	I1020 12:17:35.293393  299029 certs.go:195] generating shared ca certs ...
	I1020 12:17:35.293410  299029 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:35.293604  299029 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 12:17:35.789098  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt ...
	I1020 12:17:35.789132  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt: {Name:mk100687b17b53131e0ad96dd826d6f897d4f422 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:35.789333  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key ...
	I1020 12:17:35.789346  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key: {Name:mkc65d9e10e235e5d5e977982a5ddd0c3440b521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:35.789436  299029 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 12:17:36.397692  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt ...
	I1020 12:17:36.397723  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt: {Name:mk53e5ae88cbe9f151f8c7f76ee9f32d78c9d216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.397918  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key ...
	I1020 12:17:36.397931  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key: {Name:mk9aef008c6b34655ab99530acbbbd634dfd5779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.398010  299029 certs.go:257] generating profile certs ...
	I1020 12:17:36.398068  299029 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.key
	I1020 12:17:36.398085  299029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt with IP's: []
	I1020 12:17:36.611936  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt ...
	I1020 12:17:36.611972  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: {Name:mk9d33fbc882caec5030ee07719998e29823b3c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.612164  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.key ...
	I1020 12:17:36.612177  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.key: {Name:mk821d742b4d8e0a258529e6c4b5fe608906fafa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.612251  299029 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key.e372c2f9
	I1020 12:17:36.612274  299029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt.e372c2f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1020 12:17:36.963750  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt.e372c2f9 ...
	I1020 12:17:36.963780  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt.e372c2f9: {Name:mk621c277847c7a16ba8eebe5483bab8d9f18b73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.963962  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key.e372c2f9 ...
	I1020 12:17:36.963978  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key.e372c2f9: {Name:mk9b7e396ff325e604f1bcd3fac4cf83fb2bd240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.964067  299029 certs.go:382] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt.e372c2f9 -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt
	I1020 12:17:36.964145  299029 certs.go:386] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key.e372c2f9 -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key
	I1020 12:17:36.964200  299029 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.key
	I1020 12:17:36.964220  299029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.crt with IP's: []
	I1020 12:17:37.132444  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.crt ...
	I1020 12:17:37.132474  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.crt: {Name:mk9f53b1fc4bffe3eaccea535de5d059f57e4f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:37.132681  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.key ...
	I1020 12:17:37.132694  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.key: {Name:mk0dbdca144dc8b482affaba665cc25d72548a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:37.132880  299029 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 12:17:37.132927  299029 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 12:17:37.132956  299029 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:17:37.132984  299029 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 12:17:37.133606  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:17:37.152147  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 12:17:37.171012  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:17:37.189081  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 12:17:37.206829  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 12:17:37.224484  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:17:37.241518  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:17:37.259107  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 12:17:37.278180  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:17:37.295536  299029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:17:37.308306  299029 ssh_runner.go:195] Run: openssl version
	I1020 12:17:37.315002  299029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:17:37.323662  299029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:17:37.327726  299029 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:17:37.327841  299029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:17:37.376778  299029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
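	The two openssl steps above compute the CA certificate's subject hash and install the <hash>.0 symlink (b5213941.0 here) that the system trust store uses for lookup. A small sketch of the same hash-and-link flow (the PEM path is the log's; creating the link requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	// Equivalent of `ln -fs`: drop any stale link, then relink.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}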
	I1020 12:17:37.385292  299029 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:17:37.389045  299029 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:17:37.389094  299029 kubeadm.go:400] StartCluster: {Name:addons-399470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:17:37.389176  299029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:17:37.389240  299029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:17:37.416094  299029 cri.go:89] found id: ""
	I1020 12:17:37.416180  299029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:17:37.424247  299029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:17:37.431841  299029 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:17:37.431949  299029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:17:37.439788  299029 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:17:37.439808  299029 kubeadm.go:157] found existing configuration files:
	
	I1020 12:17:37.439862  299029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:17:37.447607  299029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:17:37.447672  299029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:17:37.455145  299029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:17:37.462509  299029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:17:37.462648  299029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:17:37.469775  299029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:17:37.478991  299029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:17:37.479095  299029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:17:37.486275  299029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:17:37.493979  299029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:17:37.494117  299029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:17:37.501462  299029 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:17:37.539763  299029 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:17:37.539946  299029 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:17:37.570119  299029 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:17:37.570253  299029 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1020 12:17:37.570306  299029 kubeadm.go:318] OS: Linux
	I1020 12:17:37.570390  299029 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:17:37.570484  299029 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1020 12:17:37.570560  299029 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:17:37.570644  299029 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:17:37.570725  299029 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:17:37.570822  299029 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:17:37.570896  299029 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:17:37.570978  299029 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:17:37.571054  299029 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1020 12:17:37.654745  299029 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:17:37.654876  299029 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:17:37.654976  299029 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:17:37.663889  299029 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:17:37.670357  299029 out.go:252]   - Generating certificates and keys ...
	I1020 12:17:37.670471  299029 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:17:37.670548  299029 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:17:38.969651  299029 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:17:39.252870  299029 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:17:39.825944  299029 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:17:40.335493  299029 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:17:40.735291  299029 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:17:40.735668  299029 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-399470 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1020 12:17:41.299058  299029 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:17:41.299433  299029 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-399470 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1020 12:17:41.919399  299029 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:17:43.034137  299029 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:17:43.544017  299029 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:17:43.544301  299029 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:17:44.358635  299029 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:17:44.620628  299029 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:17:45.539093  299029 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:17:46.548465  299029 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:17:47.054938  299029 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:17:47.055725  299029 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:17:47.060239  299029 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 12:17:47.063786  299029 out.go:252]   - Booting up control plane ...
	I1020 12:17:47.063903  299029 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 12:17:47.063986  299029 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 12:17:47.064338  299029 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 12:17:47.079679  299029 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 12:17:47.079801  299029 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 12:17:47.087382  299029 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 12:17:47.087729  299029 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 12:17:47.087777  299029 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 12:17:47.221954  299029 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 12:17:47.222079  299029 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 12:17:49.223547  299029 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00188487s
	I1020 12:17:49.227391  299029 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:17:49.227506  299029 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1020 12:17:49.227821  299029 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:17:49.228002  299029 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:17:52.078746  299029 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.850327761s
	I1020 12:17:53.981280  299029 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.753555593s
	I1020 12:17:55.729941  299029 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502163524s
	I1020 12:17:55.750126  299029 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:17:55.765490  299029 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:17:55.778899  299029 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:17:55.779218  299029 kubeadm.go:318] [mark-control-plane] Marking the node addons-399470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:17:55.790743  299029 kubeadm.go:318] [bootstrap-token] Using token: ekj0pw.jhw5dgl2640j8feo
	I1020 12:17:55.795865  299029 out.go:252]   - Configuring RBAC rules ...
	I1020 12:17:55.796056  299029 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:17:55.798937  299029 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:17:55.811415  299029 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:17:55.817375  299029 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:17:55.823909  299029 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:17:55.831607  299029 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:17:56.139796  299029 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:17:56.588892  299029 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:17:57.137117  299029 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:17:57.138327  299029 kubeadm.go:318] 
	I1020 12:17:57.138422  299029 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:17:57.138433  299029 kubeadm.go:318] 
	I1020 12:17:57.138515  299029 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:17:57.138524  299029 kubeadm.go:318] 
	I1020 12:17:57.138552  299029 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:17:57.138618  299029 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:17:57.138675  299029 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:17:57.138684  299029 kubeadm.go:318] 
	I1020 12:17:57.138741  299029 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:17:57.138768  299029 kubeadm.go:318] 
	I1020 12:17:57.138822  299029 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:17:57.138831  299029 kubeadm.go:318] 
	I1020 12:17:57.138886  299029 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:17:57.138968  299029 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:17:57.139043  299029 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:17:57.139052  299029 kubeadm.go:318] 
	I1020 12:17:57.139141  299029 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:17:57.139224  299029 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:17:57.139231  299029 kubeadm.go:318] 
	I1020 12:17:57.139319  299029 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ekj0pw.jhw5dgl2640j8feo \
	I1020 12:17:57.139432  299029 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 \
	I1020 12:17:57.139458  299029 kubeadm.go:318] 	--control-plane 
	I1020 12:17:57.139466  299029 kubeadm.go:318] 
	I1020 12:17:57.139554  299029 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:17:57.139563  299029 kubeadm.go:318] 
	I1020 12:17:57.139649  299029 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ekj0pw.jhw5dgl2640j8feo \
	I1020 12:17:57.139761  299029 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 
	I1020 12:17:57.142838  299029 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1020 12:17:57.143128  299029 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1020 12:17:57.143262  299029 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
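	kubeadm init succeeds here despite the SystemVerification warnings because minikube passes an explicit --ignore-preflight-errors list. A sketch that reassembles the invocation logged at 12:17:37.501462 (the flag values are copied verbatim from that line):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Preflight checks suppressed for the docker driver, as logged above.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	cmd := fmt.Sprintf(
		`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignored, ","),
	)
	fmt.Println(cmd)
}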
	I1020 12:17:57.143289  299029 cni.go:84] Creating CNI manager for ""
	I1020 12:17:57.143297  299029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:17:57.146542  299029 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 12:17:57.149404  299029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:17:57.153341  299029 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:17:57.153412  299029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:17:57.166103  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 12:17:57.455191  299029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:17:57.455358  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:57.455504  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-399470 minikube.k8s.io/updated_at=2025_10_20T12_17_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=addons-399470 minikube.k8s.io/primary=true
	I1020 12:17:57.604161  299029 ops.go:34] apiserver oom_adj: -16
	I1020 12:17:57.604352  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:58.104460  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:58.605407  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:59.104456  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:59.605101  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:00.108674  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:00.605085  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:01.104900  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:01.604570  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:01.748232  299029 kubeadm.go:1113] duration metric: took 4.292935629s to wait for elevateKubeSystemPrivileges
	I1020 12:18:01.748261  299029 kubeadm.go:402] duration metric: took 24.359171245s to StartCluster
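	The burst of `kubectl get sa default` calls between 12:17:57 and 12:18:01 is a ~500ms poll: the minikube-rbac binding is only usable once the "default" ServiceAccount exists, which is what the 4.29s elevateKubeSystemPrivileges metric measures. A hedged sketch of that wait loop (binary and kubeconfig paths as logged):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // ServiceAccount exists; privileges can be granted
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		panic(err)
	}
}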
	I1020 12:18:01.748278  299029 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:18:01.748419  299029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:18:01.748830  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:18:01.749032  299029 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:18:01.749168  299029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:18:01.749405  299029 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:18:01.749447  299029 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1020 12:18:01.749538  299029 addons.go:69] Setting yakd=true in profile "addons-399470"
	I1020 12:18:01.749555  299029 addons.go:238] Setting addon yakd=true in "addons-399470"
	I1020 12:18:01.749576  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.750055  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.750555  299029 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-399470"
	I1020 12:18:01.750574  299029 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-399470"
	I1020 12:18:01.750598  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.751022  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.753853  299029 out.go:179] * Verifying Kubernetes components...
	I1020 12:18:01.755071  299029 addons.go:69] Setting registry=true in profile "addons-399470"
	I1020 12:18:01.755097  299029 addons.go:238] Setting addon registry=true in "addons-399470"
	I1020 12:18:01.755127  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.755564  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.755727  299029 addons.go:69] Setting registry-creds=true in profile "addons-399470"
	I1020 12:18:01.756298  299029 addons.go:238] Setting addon registry-creds=true in "addons-399470"
	I1020 12:18:01.756355  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.759057  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.755858  299029 addons.go:69] Setting storage-provisioner=true in profile "addons-399470"
	I1020 12:18:01.760251  299029 addons.go:238] Setting addon storage-provisioner=true in "addons-399470"
	I1020 12:18:01.760306  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.755869  299029 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-399470"
	I1020 12:18:01.755879  299029 addons.go:69] Setting volcano=true in profile "addons-399470"
	I1020 12:18:01.755885  299029 addons.go:69] Setting volumesnapshots=true in profile "addons-399470"
	I1020 12:18:01.756210  299029 addons.go:69] Setting ingress=true in profile "addons-399470"
	I1020 12:18:01.756220  299029 addons.go:69] Setting cloud-spanner=true in profile "addons-399470"
	I1020 12:18:01.756227  299029 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-399470"
	I1020 12:18:01.756234  299029 addons.go:69] Setting default-storageclass=true in profile "addons-399470"
	I1020 12:18:01.756241  299029 addons.go:69] Setting gcp-auth=true in profile "addons-399470"
	I1020 12:18:01.756248  299029 addons.go:69] Setting inspektor-gadget=true in profile "addons-399470"
	I1020 12:18:01.756253  299029 addons.go:69] Setting ingress-dns=true in profile "addons-399470"
	I1020 12:18:01.756270  299029 addons.go:69] Setting metrics-server=true in profile "addons-399470"
	I1020 12:18:01.756277  299029 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-399470"
	I1020 12:18:01.760797  299029 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-399470"
	I1020 12:18:01.760937  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.770591  299029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:18:01.771235  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.774978  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.790428  299029 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-399470"
	I1020 12:18:01.790807  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.791266  299029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-399470"
	I1020 12:18:01.791757  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.808569  299029 mustload.go:65] Loading cluster: addons-399470
	I1020 12:18:01.808788  299029 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:18:01.809040  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.814255  299029 addons.go:238] Setting addon volcano=true in "addons-399470"
	I1020 12:18:01.814378  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.814858  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.822604  299029 addons.go:238] Setting addon inspektor-gadget=true in "addons-399470"
	I1020 12:18:01.822668  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.823146  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.840499  299029 addons.go:238] Setting addon ingress-dns=true in "addons-399470"
	I1020 12:18:01.840569  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.841053  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.843072  299029 addons.go:238] Setting addon volumesnapshots=true in "addons-399470"
	I1020 12:18:01.843152  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.843694  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.860465  299029 addons.go:238] Setting addon metrics-server=true in "addons-399470"
	I1020 12:18:01.860523  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.861023  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.876140  299029 addons.go:238] Setting addon ingress=true in "addons-399470"
	I1020 12:18:01.876272  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.876886  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.895292  299029 addons.go:238] Setting addon cloud-spanner=true in "addons-399470"
	I1020 12:18:01.895348  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.896031  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.915272  299029 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-399470"
	I1020 12:18:01.915321  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.915784  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.917932  299029 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1020 12:18:01.921392  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1020 12:18:01.921462  299029 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1020 12:18:01.921572  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:01.975939  299029 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1020 12:18:01.978119  299029 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1020 12:18:01.978712  299029 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1020 12:18:01.979159  299029 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1020 12:18:01.993573  299029 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 12:18:01.993663  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1020 12:18:01.993763  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:01.994394  299029 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 12:18:01.994414  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1020 12:18:01.994458  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.003377  299029 addons.go:238] Setting addon default-storageclass=true in "addons-399470"
	I1020 12:18:02.003428  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:02.003889  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:02.005244  299029 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 12:18:02.005272  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1020 12:18:02.005335  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.032519  299029 out.go:179]   - Using image docker.io/registry:3.0.0
	I1020 12:18:02.051998  299029 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1020 12:18:02.052019  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1020 12:18:02.052085  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.076470  299029 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:18:02.077496  299029 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-399470"
	I1020 12:18:02.077533  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:02.077934  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:02.084182  299029 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:18:02.084217  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:18:02.084287  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.107216  299029 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1020 12:18:02.110878  299029 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 12:18:02.110908  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1020 12:18:02.110984  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.128217  299029 host.go:66] Checking if "addons-399470" exists ...
	W1020 12:18:02.128564  299029 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1020 12:18:02.133714  299029 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1020 12:18:02.148058  299029 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 12:18:02.148098  299029 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1020 12:18:02.148170  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.196151  299029 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1020 12:18:02.199079  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1020 12:18:02.203980  299029 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 12:18:02.205564  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1020 12:18:02.205609  299029 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1020 12:18:02.205680  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.226284  299029 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1020 12:18:02.245234  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.248534  299029 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1020 12:18:02.248560  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1020 12:18:02.248624  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.249085  299029 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 12:18:02.250330  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1020 12:18:02.260551  299029 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:18:02.278714  299029 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:18:02.278792  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.264728  299029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:18:02.283069  299029 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 12:18:02.283166  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1020 12:18:02.283234  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.308935  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1020 12:18:02.313457  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1020 12:18:02.316496  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1020 12:18:02.319650  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1020 12:18:02.322570  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1020 12:18:02.325458  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1020 12:18:02.328470  299029 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1020 12:18:02.328902  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.328977  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.329368  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.333451  299029 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1020 12:18:02.333483  299029 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1020 12:18:02.333555  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.336474  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1020 12:18:02.339819  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1020 12:18:02.339845  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1020 12:18:02.339933  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.381473  299029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:18:02.381634  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.382646  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.384963  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.426712  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.429379  299029 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1020 12:18:02.434523  299029 out.go:179]   - Using image docker.io/busybox:stable
	I1020 12:18:02.439547  299029 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 12:18:02.439577  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1020 12:18:02.439668  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.441379  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.456468  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.469022  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.487802  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.506330  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.516228  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	W1020 12:18:02.523548  299029 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1020 12:18:02.523581  299029 retry.go:31] will retry after 362.188358ms: ssh: handshake failed: EOF
	I1020 12:18:02.528569  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.871320  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1020 12:18:02.871387  299029 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	W1020 12:18:02.887479  299029 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1020 12:18:02.887548  299029 retry.go:31] will retry after 483.199552ms: ssh: handshake failed: EOF
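	The two handshake failures above are absorbed by a jittered retry rather than failing the start, since the node's sshd may still be settling while many addon clients dial at once. A rough stand-in for that pattern, dialing the logged endpoint at the TCP level (the helper is hypothetical, not minikube's sshutil):

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		// Jittered backoff in the 300-600ms range, like the 362ms/483ms waits above.
		time.Sleep(time.Duration(300+rand.Intn(300)) * time.Millisecond)
	}
	return nil, fmt.Errorf("dial %s failed after %d attempts: %w", addr, attempts, lastErr)
}

func main() {
	conn, err := dialWithRetry("127.0.0.1:33138", 5)
	if err != nil {
		panic(err)
	}
	conn.Close()
}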
	I1020 12:18:03.048081  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1020 12:18:03.048108  299029 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1020 12:18:03.068093  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 12:18:03.100995  299029 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1020 12:18:03.101023  299029 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1020 12:18:03.119723  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1020 12:18:03.119748  299029 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1020 12:18:03.145407  299029 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 12:18:03.145430  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1020 12:18:03.161679  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1020 12:18:03.161758  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1020 12:18:03.205288  299029 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1020 12:18:03.205313  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1020 12:18:03.213238  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 12:18:03.230620  299029 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 12:18:03.230648  299029 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1020 12:18:03.234749  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 12:18:03.265828  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1020 12:18:03.292798  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1020 12:18:03.352099  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 12:18:03.356639  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:18:03.372994  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 12:18:03.376795  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:18:03.390876  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1020 12:18:03.398008  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 12:18:03.479754  299029 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 12:18:03.479830  299029 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1020 12:18:03.481010  299029 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1020 12:18:03.481067  299029 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1020 12:18:03.481661  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1020 12:18:03.481707  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1020 12:18:03.609841  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 12:18:03.711195  299029 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1020 12:18:03.711226  299029 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1020 12:18:03.728790  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1020 12:18:03.728818  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1020 12:18:03.882752  299029 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1020 12:18:03.882809  299029 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1020 12:18:03.919376  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1020 12:18:03.919403  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1020 12:18:04.086935  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1020 12:18:04.086967  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1020 12:18:04.138521  299029 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:04.138605  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1020 12:18:04.142056  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1020 12:18:04.142130  299029 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1020 12:18:04.204448  299029 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.925366032s)
	I1020 12:18:04.204523  299029 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
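For reference, the sed pipeline completed above edits the coredns ConfigMap in place: it inserts a log directive before the errors plugin and a hosts stanza before the forward block, so the resulting Corefile fragment looks roughly like this (reconstructed from the sed expressions in the command itself; surrounding directives elided):

		log
		errors
		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}
		forward . /etc/resolv.conf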
	I1020 12:18:04.205521  299029 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.824019711s)
	I1020 12:18:04.206229  299029 node_ready.go:35] waiting up to 6m0s for node "addons-399470" to be "Ready" ...
	I1020 12:18:04.350299  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:04.382846  299029 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 12:18:04.382923  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1020 12:18:04.410725  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1020 12:18:04.410792  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1020 12:18:04.599643  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1020 12:18:04.599668  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1020 12:18:04.606310  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 12:18:04.666475  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.598315257s)
	I1020 12:18:04.711368  299029 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-399470" context rescaled to 1 replicas
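The rescale logged by kapi.go:214 is the same operation as this kubectl call (a sketch for orientation; minikube performs it through its Go client rather than by shelling out):

		kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1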
	I1020 12:18:04.810868  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1020 12:18:04.810905  299029 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1020 12:18:04.979428  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1020 12:18:04.979449  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1020 12:18:05.154185  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1020 12:18:05.154253  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1020 12:18:05.408700  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1020 12:18:05.408767  299029 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1020 12:18:05.569805  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1020 12:18:06.226811  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
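node_ready.go is polling the node's Ready condition and keeps retrying until it reports True; the equivalent manual check (a hedged one-liner, not taken from this log) is:

		kubectl get node addons-399470 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

which prints False while the kubelet and pod network are still coming up.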
	I1020 12:18:06.435397  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.222123021s)
	I1020 12:18:06.435509  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.200735457s)
	I1020 12:18:06.435560  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.169669314s)
	I1020 12:18:06.541349  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.248457898s)
	I1020 12:18:06.545100  299029 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-399470 service yakd-dashboard -n yakd-dashboard
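
One way to follow that hint end to end is to wait for the dashboard Pod and then open the service; note the label selector below is an assumption for illustration, not taken from this log:

		kubectl -n yakd-dashboard wait --for=condition=Ready pod -l app.kubernetes.io/name=yakd --timeout=5m
		minikube -p addons-399470 service yakd-dashboard -n yakd-dashboard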
	
	I1020 12:18:07.140126  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.787944365s)
	I1020 12:18:07.278434  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.921719472s)
	I1020 12:18:07.278542  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.905452882s)
	I1020 12:18:07.278623  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.887669677s)
	I1020 12:18:07.278756  299029 addons.go:479] Verifying addon registry=true in "addons-399470"
	I1020 12:18:07.278659  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.901701352s)
	I1020 12:18:07.281821  299029 out.go:179] * Verifying registry addon...
	I1020 12:18:07.285779  299029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1020 12:18:07.300225  299029 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 12:18:07.300244  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:07.790641  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:08.020526  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.410640348s)
	I1020 12:18:08.020567  299029 addons.go:479] Verifying addon metrics-server=true in "addons-399470"
	I1020 12:18:08.020661  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.670272965s)
	W1020 12:18:08.020681  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:08.020697  299029 retry.go:31] will retry after 355.586941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
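The validation error above means ig-crd.yaml reached kubectl without type metadata: client-side validation requires every manifest document to declare at least apiVersion and kind, for example (generic CRD header shown for illustration, not the actual file contents):

		apiVersion: apiextensions.k8s.io/v1
		kind: CustomResourceDefinition

Because the file is re-applied unchanged, each retry below fails with the same message.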
	I1020 12:18:08.020742  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.622659292s)
	I1020 12:18:08.020756  299029 addons.go:479] Verifying addon ingress=true in "addons-399470"
	I1020 12:18:08.020888  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.414522374s)
	W1020 12:18:08.020976  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1020 12:18:08.021009  299029 retry.go:31] will retry after 288.111231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
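The "ensure CRDs are installed first" hint is the usual CRD ordering race: the VolumeSnapshotClass object was submitted in the same apply as the CRD that defines it, before the API server had established the new type. Done by hand, a serialized fix would look like this sketch (minikube instead retries the whole batch with --force, as seen below):

		kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f csi-hostpath-snapshotclass.yaml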
	I1020 12:18:08.024695  299029 out.go:179] * Verifying ingress addon...
	I1020 12:18:08.029707  299029 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1020 12:18:08.049433  299029 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1020 12:18:08.049458  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:08.297487  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:08.309758  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 12:18:08.377125  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:08.434092  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.86416577s)
	I1020 12:18:08.434194  299029 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-399470"
	I1020 12:18:08.439312  299029 out.go:179] * Verifying csi-hostpath-driver addon...
	I1020 12:18:08.443037  299029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1020 12:18:08.459972  299029 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 12:18:08.459999  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:08.558485  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:08.710913  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:08.800199  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:08.953455  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:09.033288  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:09.291071  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:09.446377  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:09.547188  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:09.738003  299029 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1020 12:18:09.738135  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:09.754637  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:09.789428  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:09.881996  299029 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1020 12:18:09.895181  299029 addons.go:238] Setting addon gcp-auth=true in "addons-399470"
	I1020 12:18:09.895233  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:09.895696  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:09.914194  299029 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1020 12:18:09.914265  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:09.934329  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
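The Go template passed to docker container inspect above digs the host port that Docker mapped to the container's SSH port 22 out of .NetworkSettings.Ports; standalone it reads:

		docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-399470

which here resolves to 33138, the port the new ssh client connects to.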
	I1020 12:18:09.947324  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:10.033363  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:10.289120  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:10.445910  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:10.532935  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:10.788696  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:10.946611  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:11.033333  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:11.155016  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.845206385s)
	I1020 12:18:11.155120  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.777950466s)
	W1020 12:18:11.155150  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:11.155166  299029 retry.go:31] will retry after 504.015335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 12:18:11.155215  299029 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.241000553s)
	I1020 12:18:11.158570  299029 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 12:18:11.161717  299029 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1020 12:18:11.164696  299029 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1020 12:18:11.164729  299029 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1020 12:18:11.178640  299029 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1020 12:18:11.178671  299029 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1020 12:18:11.192068  299029 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1020 12:18:11.192091  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1020 12:18:11.204777  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1020 12:18:11.211046  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:11.289765  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:11.447302  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:11.534416  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:11.659515  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:11.707260  299029 addons.go:479] Verifying addon gcp-auth=true in "addons-399470"
	I1020 12:18:11.710507  299029 out.go:179] * Verifying gcp-auth addon...
	I1020 12:18:11.714156  299029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1020 12:18:11.726296  299029 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1020 12:18:11.726374  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:11.825629  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:11.947091  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:12.033762  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:12.217102  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:12.289886  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:12.448134  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1020 12:18:12.524562  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:12.524593  299029 retry.go:31] will retry after 834.266364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 12:18:12.533656  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:12.717568  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:12.789428  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:12.946221  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:13.033376  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:13.217332  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:13.217699  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:13.289704  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:13.360018  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:13.447040  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:13.534016  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:13.717891  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:13.788951  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:13.946588  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:14.033432  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:14.181717  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:14.181750  299029 retry.go:31] will retry after 701.065345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 12:18:14.217449  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:14.289215  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:14.446628  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:14.532447  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:14.716779  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:14.789497  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:14.883860  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:14.946104  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:15.033805  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:15.218145  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:15.289271  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:15.447199  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:15.533603  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:15.711011  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:15.718043  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:15.754348  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:15.754380  299029 retry.go:31] will retry after 1.128567092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 12:18:15.789478  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:15.946469  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:16.033525  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:16.217998  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:16.288799  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:16.446086  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:16.532909  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:16.717570  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:16.789301  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:16.883668  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:16.954157  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:17.034690  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:17.218110  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:17.289083  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:17.446801  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:17.534787  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:17.711416  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:17.730288  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:17.774124  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:17.774159  299029 retry.go:31] will retry after 1.578236639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:17.788727  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:17.946922  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:18.032854  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:18.217753  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:18.289921  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:18.446150  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:18.533126  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:18.717896  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:18.788946  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:18.946743  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:19.033565  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:19.216927  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:19.288894  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:19.353035  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:19.446407  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:19.535849  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:19.717686  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:19.789070  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:19.946038  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:20.034213  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:20.188233  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:20.188266  299029 retry.go:31] will retry after 3.193820206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1020 12:18:20.210248  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:20.217085  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:20.289131  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:20.446194  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:20.533010  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:20.717680  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:20.789434  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:20.946195  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:21.033277  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:21.217669  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:21.289701  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:21.446694  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:21.533387  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:21.718583  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:21.789642  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:21.946630  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:22.033581  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:22.210899  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:22.217552  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:22.289493  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:22.446191  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:22.533717  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:22.717379  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:22.789039  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:22.945877  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:23.033005  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:23.218203  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:23.289127  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:23.382316  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:23.452839  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:23.534059  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:23.718119  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:23.818949  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:23.946948  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:24.033661  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:24.211541  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:24.217325  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:24.225116  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:24.225147  299029 retry.go:31] will retry after 3.710606073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 12:18:24.288917  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:24.446161  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:24.533950  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:24.717502  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:24.789467  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:24.946286  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:25.033496  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:25.217506  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:25.289697  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:25.446711  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:25.533314  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:25.718245  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:25.788869  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:25.946996  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:26.033559  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:26.218006  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:26.288588  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:26.446610  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:26.533575  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:26.710621  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:26.717202  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:26.789021  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:26.946727  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:27.032744  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:27.218219  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:27.289423  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:27.446911  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:27.548124  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:27.718031  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:27.789557  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:27.936688  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:27.947601  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:28.034202  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:28.217915  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:28.289133  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:28.445998  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:28.533466  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:28.711628  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:28.717680  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:28.736255  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:28.736339  299029 retry.go:31] will retry after 8.963590507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:28.789304  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:28.945837  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:29.033411  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:29.217853  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:29.288557  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:29.446705  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:29.534064  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:29.717483  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:29.789327  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:29.946129  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:30.033680  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:30.217317  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:30.289051  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:30.446387  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:30.533534  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:30.717056  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:30.788696  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:30.946481  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:31.033377  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:31.210597  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:31.217541  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:31.289481  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:31.446316  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:31.533828  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:31.717625  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:31.789442  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:31.946603  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:32.033448  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:32.217118  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:32.288717  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:32.447033  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:32.532884  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:32.717388  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:32.789590  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:32.946712  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:33.033874  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:33.217433  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:33.289427  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:33.446321  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:33.533284  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:33.710102  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:33.717149  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:33.788866  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:33.946711  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:34.032818  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:34.217956  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:34.289142  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:34.446367  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:34.533493  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:34.717903  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:34.789053  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:34.946617  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:35.033660  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:35.217404  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:35.289113  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:35.445924  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:35.532655  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:35.710891  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:35.718081  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:35.789590  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:35.946454  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:36.033433  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:36.217244  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:36.289050  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:36.446012  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:36.533255  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:36.717578  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:36.789680  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:36.946575  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:37.033516  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:37.218394  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:37.289424  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:37.446701  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:37.533348  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:37.700754  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1020 12:18:37.710997  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:37.719192  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:37.788961  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:37.946342  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:38.034641  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:38.218527  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:38.289984  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:38.447086  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1020 12:18:38.503693  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:38.503727  299029 retry.go:31] will retry after 13.986886357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:38.532549  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:38.717328  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:38.788918  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:38.946667  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:39.032670  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:39.216936  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:39.288408  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:39.446311  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:39.533549  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:39.717566  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:39.789134  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:39.946519  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:40.036127  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:40.210776  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:40.217610  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:40.289295  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:40.446483  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:40.533328  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:40.716978  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:40.788617  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:40.946521  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:41.033838  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:41.218337  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:41.289109  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:41.446218  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:41.533265  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:41.719328  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:41.789019  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:41.946817  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:42.033016  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:42.211270  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:42.218139  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:42.288979  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:42.445974  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:42.533130  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:42.716909  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:42.788888  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:42.946705  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:43.033718  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:43.217712  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:43.289547  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:43.451997  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:43.550284  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:43.737467  299029 node_ready.go:49] node "addons-399470" is "Ready"
	I1020 12:18:43.737496  299029 node_ready.go:38] duration metric: took 39.530206121s for node "addons-399470" to be "Ready" ...
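The node_ready warnings stop here: the node reported Ready after roughly 39.5s. An equivalent one-shot check with stock kubectl, sketched under the assumption that the host kubeconfig points at this cluster:

    # What node_ready.go has been polling for:
    kubectl get node addons-399470
    # Or block until the Ready condition is true (the timeout value is illustrative):
    kubectl wait --for=condition=Ready node/addons-399470 --timeout=120s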
	I1020 12:18:43.737510  299029 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:18:43.737568  299029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:18:43.741060  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:43.765942  299029 api_server.go:72] duration metric: took 42.016875914s to wait for apiserver process to appear ...
	I1020 12:18:43.765967  299029 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:18:43.765988  299029 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1020 12:18:43.804346  299029 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
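The healthz gate is a plain HTTPS GET against the apiserver endpoint logged above. A hand-run equivalent, sketched with curl (-k skips TLS verification because the apiserver serves a cluster-internal certificate; for a verified check, minikube's CA typically lives at ~/.minikube/ca.crt):

    curl -k https://192.168.49.2:8443/healthz
    # Prints "ok" on success, matching the response in the log.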
	I1020 12:18:43.805868  299029 api_server.go:141] control plane version: v1.34.1
	I1020 12:18:43.805896  299029 api_server.go:131] duration metric: took 39.920295ms to wait for apiserver health ...
	I1020 12:18:43.805906  299029 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:18:43.842500  299029 system_pods.go:59] 19 kube-system pods found
	I1020 12:18:43.842536  299029 system_pods.go:61] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:43.842544  299029 system_pods.go:61] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending
	I1020 12:18:43.842550  299029 system_pods.go:61] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending
	I1020 12:18:43.842554  299029 system_pods.go:61] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending
	I1020 12:18:43.842558  299029 system_pods.go:61] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:43.842563  299029 system_pods.go:61] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:43.842571  299029 system_pods.go:61] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:43.842575  299029 system_pods.go:61] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:43.842591  299029 system_pods.go:61] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:43.842597  299029 system_pods.go:61] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:43.842608  299029 system_pods.go:61] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:43.842614  299029 system_pods.go:61] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:43.842619  299029 system_pods.go:61] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending
	I1020 12:18:43.842632  299029 system_pods.go:61] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:43.842640  299029 system_pods.go:61] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:43.842645  299029 system_pods.go:61] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending
	I1020 12:18:43.842650  299029 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending
	I1020 12:18:43.842658  299029 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending
	I1020 12:18:43.842663  299029 system_pods.go:61] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending
	I1020 12:18:43.842671  299029 system_pods.go:74] duration metric: took 36.759689ms to wait for pod list to return data ...
	I1020 12:18:43.842684  299029 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:18:43.842963  299029 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 12:18:43.842983  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:43.853397  299029 default_sa.go:45] found service account: "default"
	I1020 12:18:43.853425  299029 default_sa.go:55] duration metric: took 10.733541ms for default service account to be created ...
	I1020 12:18:43.853435  299029 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:18:43.882726  299029 system_pods.go:86] 19 kube-system pods found
	I1020 12:18:43.882772  299029 system_pods.go:89] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:43.882782  299029 system_pods.go:89] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 12:18:43.882788  299029 system_pods.go:89] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending
	I1020 12:18:43.882792  299029 system_pods.go:89] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending
	I1020 12:18:43.882796  299029 system_pods.go:89] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:43.882801  299029 system_pods.go:89] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:43.882806  299029 system_pods.go:89] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:43.882810  299029 system_pods.go:89] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:43.882817  299029 system_pods.go:89] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:43.882830  299029 system_pods.go:89] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:43.882835  299029 system_pods.go:89] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:43.882848  299029 system_pods.go:89] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:43.882853  299029 system_pods.go:89] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending
	I1020 12:18:43.882865  299029 system_pods.go:89] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:43.882872  299029 system_pods.go:89] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:43.882883  299029 system_pods.go:89] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending
	I1020 12:18:43.882887  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending
	I1020 12:18:43.882891  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending
	I1020 12:18:43.882895  299029 system_pods.go:89] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending
	I1020 12:18:43.882912  299029 retry.go:31] will retry after 254.407106ms: missing components: kube-dns
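The "missing components: kube-dns" retries track coredns, which the listing above still shows as Pending/ContainersNotReady. The same view by hand, assuming the host kubeconfig targets this cluster (coredns carries the k8s-app=kube-dns label in kubeadm-based clusters):

    # coredns is the pod behind the kube-dns component check.
    kubectl get pods -n kube-system -l k8s-app=kube-dns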
	I1020 12:18:43.986443  299029 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 12:18:43.986471  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:44.078371  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:44.151779  299029 system_pods.go:86] 19 kube-system pods found
	I1020 12:18:44.151818  299029 system_pods.go:89] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:44.151828  299029 system_pods.go:89] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 12:18:44.151836  299029 system_pods.go:89] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 12:18:44.151844  299029 system_pods.go:89] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 12:18:44.151848  299029 system_pods.go:89] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:44.151855  299029 system_pods.go:89] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:44.151860  299029 system_pods.go:89] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:44.151865  299029 system_pods.go:89] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:44.151871  299029 system_pods.go:89] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:44.151875  299029 system_pods.go:89] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:44.151889  299029 system_pods.go:89] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:44.151896  299029 system_pods.go:89] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:44.151913  299029 system_pods.go:89] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 12:18:44.151921  299029 system_pods.go:89] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:44.151935  299029 system_pods.go:89] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:44.151941  299029 system_pods.go:89] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 12:18:44.151945  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending
	I1020 12:18:44.151952  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.151962  299029 system_pods.go:89] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending
	I1020 12:18:44.151979  299029 retry.go:31] will retry after 304.358853ms: missing components: kube-dns
	I1020 12:18:44.228216  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:44.329956  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:44.447462  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:44.460796  299029 system_pods.go:86] 19 kube-system pods found
	I1020 12:18:44.460839  299029 system_pods.go:89] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:44.460848  299029 system_pods.go:89] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 12:18:44.460856  299029 system_pods.go:89] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 12:18:44.460864  299029 system_pods.go:89] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 12:18:44.460869  299029 system_pods.go:89] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:44.460875  299029 system_pods.go:89] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:44.460888  299029 system_pods.go:89] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:44.460898  299029 system_pods.go:89] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:44.460905  299029 system_pods.go:89] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:44.460914  299029 system_pods.go:89] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:44.460919  299029 system_pods.go:89] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:44.460932  299029 system_pods.go:89] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:44.460939  299029 system_pods.go:89] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 12:18:44.460955  299029 system_pods.go:89] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:44.460963  299029 system_pods.go:89] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:44.460969  299029 system_pods.go:89] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 12:18:44.460983  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.460989  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.460995  299029 system_pods.go:89] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:18:44.461012  299029 retry.go:31] will retry after 365.531083ms: missing components: kube-dns
	I1020 12:18:44.533103  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:44.716959  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:44.789779  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:44.848081  299029 system_pods.go:86] 19 kube-system pods found
	I1020 12:18:44.848117  299029 system_pods.go:89] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:44.848128  299029 system_pods.go:89] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 12:18:44.848136  299029 system_pods.go:89] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 12:18:44.848143  299029 system_pods.go:89] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 12:18:44.848147  299029 system_pods.go:89] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:44.848152  299029 system_pods.go:89] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:44.848156  299029 system_pods.go:89] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:44.848159  299029 system_pods.go:89] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:44.848166  299029 system_pods.go:89] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:44.848170  299029 system_pods.go:89] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:44.848174  299029 system_pods.go:89] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:44.848180  299029 system_pods.go:89] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:44.848187  299029 system_pods.go:89] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 12:18:44.848193  299029 system_pods.go:89] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:44.848198  299029 system_pods.go:89] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:44.848204  299029 system_pods.go:89] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 12:18:44.848210  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.848218  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.848225  299029 system_pods.go:89] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:18:44.848232  299029 system_pods.go:126] duration metric: took 994.79138ms to wait for k8s-apps to be running ...
	I1020 12:18:44.848240  299029 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:18:44.848297  299029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:18:44.876669  299029 system_svc.go:56] duration metric: took 28.403778ms WaitForService to wait for kubelet
	I1020 12:18:44.876705  299029 kubeadm.go:586] duration metric: took 43.127643779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:18:44.876728  299029 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:18:44.881702  299029 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 12:18:44.881734  299029 node_conditions.go:123] node cpu capacity is 2
	I1020 12:18:44.881749  299029 node_conditions.go:105] duration metric: took 5.014881ms to run NodePressure ...
	I1020 12:18:44.881762  299029 start.go:241] waiting for startup goroutines ...
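The kubelet gate at 12:18:44.848 is a single systemctl query over SSH. The same check by hand, simplified to the standard single-unit form, as a sketch via minikube ssh:

    # Exits 0 silently when kubelet is active, mirroring the logged check.
    minikube ssh -p addons-399470 -- sudo systemctl is-active --quiet kubelet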
	I1020 12:18:44.947145  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:45.048908  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:45.218794  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:45.291163  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:45.447373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:45.533120  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:45.718977  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:45.788734  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:45.947289  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:46.033223  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:46.218725  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:46.289678  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:46.447772  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:46.533450  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:46.717621  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:46.789755  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:46.947407  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:47.033612  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:47.217730  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:47.289367  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:47.446962  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:47.532549  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:47.718094  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:47.789494  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:47.947232  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:48.033635  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:48.218825  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:48.289231  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:48.447339  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:48.534806  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:48.718117  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:48.789383  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:48.951290  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:49.037049  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:49.217914  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:49.289505  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:49.447805  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:49.533389  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:49.717643  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:49.789957  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:49.947572  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:50.034383  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:50.217747  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:50.289056  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:50.446945  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:50.534234  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:50.717588  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:50.789872  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:50.947494  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:51.034194  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:51.217396  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:51.290173  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:51.446972  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:51.533203  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:51.717452  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:51.789402  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:51.946663  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:52.032884  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:52.218303  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:52.289055  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:52.446354  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:52.491642  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:52.533527  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:52.719032  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:52.788904  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:52.946454  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:53.033880  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:53.218568  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:53.289273  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:53.446237  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:53.533581  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:53.576714  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.085034741s)
	W1020 12:18:53.576791  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:53.576825  299029 retry.go:31] will retry after 12.525708001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
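
The retry.go:31 entry above schedules a re-apply with a growing, jittered delay (about 12.5s here, about 27.9s after the next failure). As a minimal sketch of that kind of retry loop — not minikube's actual retry.go, and with the base delay and jitter fraction assumed — the pattern looks roughly like this:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // applyAddon stands in for the failing kubectl apply above; hypothetical.
    func applyAddon() error {
    	return fmt.Errorf("error validating ig-crd.yaml: apiVersion not set, kind not set")
    }

    func main() {
    	backoff := 10 * time.Second // assumed base; the log shows ~12.5s, then ~27.9s
    	for attempt := 1; attempt <= 3; attempt++ {
    		if err := applyAddon(); err == nil {
    			return // applied cleanly
    		}
    		// Jittered exponential backoff: double the delay each round and add
    		// up to 50% noise, consistent with the uneven intervals in the log.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
    		fmt.Printf("will retry after %s\n", sleep)
    		time.Sleep(sleep)
    		backoff *= 2
    	}
    	fmt.Println("giving up; surfacing the last error to the caller")
    }
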
	I1020 12:18:53.718037  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:53.789279  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:53.946899  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:54.033528  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:54.217683  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:54.289595  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:54.447042  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:54.534061  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:54.718183  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:54.789212  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:54.946638  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:55.034956  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:55.217997  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:55.289465  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:55.447473  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:55.533712  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:55.718077  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:55.790119  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:55.946414  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:56.033926  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:56.218333  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:56.289016  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:56.446202  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:56.535116  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:56.718506  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:56.790080  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:56.951720  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:57.034134  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:57.218196  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:57.293166  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:57.449224  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:57.533838  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:57.718129  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:57.790228  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:57.946554  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:58.033960  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:58.217481  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:58.292906  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:58.447629  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:58.533505  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:58.717648  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:58.789649  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:58.947069  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:59.033142  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:59.217592  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:59.289607  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:59.448207  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:59.533618  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:59.718003  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:59.791668  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:59.950591  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:00.036074  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:00.226448  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:00.293815  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:00.450926  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:00.533741  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:00.718913  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:00.790366  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:00.950534  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:01.035954  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:01.218345  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:01.289910  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:01.448405  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:01.534803  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:01.719602  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:01.794330  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:01.951865  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:02.033383  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:02.217234  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:02.289819  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:02.447916  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:02.533666  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:02.718129  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:02.790012  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:02.946158  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:03.033947  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:03.217409  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:03.289624  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:03.453633  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:03.557188  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:03.717072  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:03.790073  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:03.947133  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:04.033333  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:04.217055  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:04.289176  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:04.447424  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:04.547608  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:04.717778  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:04.790001  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:04.947628  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:05.048488  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:05.217169  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:05.289191  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:05.446730  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:05.533967  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:05.718257  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:05.819190  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:05.946594  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:06.033266  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:06.103351  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:19:06.226482  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:06.326925  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:06.446846  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:06.534165  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:06.718785  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:06.789619  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:06.949522  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:07.049116  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:07.115274  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.011881944s)
	W1020 12:19:07.115328  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:19:07.115400  299029 retry.go:31] will retry after 27.948632049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
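
The stderr here is kubectl's client-side schema validation: every document in an applied manifest must carry top-level apiVersion and kind fields, and the message indicates ig-crd.yaml contains a document (for example, an empty one left behind by a stray "---" separator) that has neither. A sketch of the same check, assuming gopkg.in/yaml.v3 and a hypothetical local copy of the addon manifest:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("ig-crd.yaml") // hypothetical local copy of the manifest
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // no more YAML documents in the stream
    			}
    			panic(err)
    		}
    		// This is the condition kubectl's validation is complaining about.
    		if doc["apiVersion"] == nil || doc["kind"] == nil {
    			fmt.Printf("document %d: apiVersion or kind not set\n", i)
    		}
    	}
    }
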
	I1020 12:19:07.217022  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:07.289257  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:07.446726  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:07.533265  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:07.717652  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:07.789807  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:07.947167  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:08.034066  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:08.217819  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:08.293028  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:08.447447  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:08.534054  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:08.718568  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:08.789460  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:08.946829  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:09.032913  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:09.221176  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:09.289145  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:09.446393  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:09.533611  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:09.717578  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:09.789559  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:09.946995  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:10.033556  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:10.218012  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:10.289380  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:10.447404  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:10.533944  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:10.718420  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:10.789154  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:10.947100  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:11.032795  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:11.217710  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:11.289976  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:11.445924  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:11.537645  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:11.718104  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:11.789224  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:11.947140  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:12.058666  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:12.218373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:12.289598  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:12.446929  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:12.534266  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:12.717482  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:12.790158  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:12.946243  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:13.035028  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:13.217715  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:13.289604  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:13.448049  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:13.533243  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:13.720849  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:13.790360  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:13.947429  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:14.049256  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:14.217895  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:14.288903  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:14.447993  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:14.532800  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:14.718006  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:14.790133  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:14.952578  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:15.039175  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:15.217964  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:15.289276  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:15.447455  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:15.534060  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:15.718518  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:15.789613  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:15.947259  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:16.033601  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:16.217808  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:16.318926  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:16.447032  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:16.532901  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:16.718430  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:16.789862  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:16.947652  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:17.034709  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:17.218277  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:17.289488  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:17.447289  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:17.533356  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:17.717373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:17.789731  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:17.947119  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:18.033467  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:18.218185  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:18.289553  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:18.447260  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:18.533895  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:18.718719  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:18.789283  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:18.947286  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:19.033629  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:19.217978  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:19.289198  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:19.446715  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:19.533180  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:19.717363  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:19.790235  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:19.947014  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:20.033667  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:20.218272  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:20.295839  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:20.446156  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:20.533136  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:20.717667  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:20.790281  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:20.946819  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:21.033470  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:21.217862  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:21.290077  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:21.446422  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:21.533557  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:21.717799  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:21.789668  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:21.947125  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:22.033879  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:22.217580  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:22.289764  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:22.448087  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:22.533232  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:22.716978  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:22.789488  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:22.947240  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:23.033745  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:23.218614  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:23.290080  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:23.447753  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:23.533730  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:23.718095  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:23.789230  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:23.946932  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:24.035949  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:24.218274  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:24.289277  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:24.447067  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:24.533349  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:24.717256  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:24.790852  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:24.948902  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:25.033212  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:25.217198  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:25.289380  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:25.446697  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:25.532667  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:25.717892  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:25.788811  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:25.946920  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:26.033045  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:26.217839  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:26.288922  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:26.445951  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:26.532920  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:26.718373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:26.789511  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:26.947990  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:27.033319  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:27.217852  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:27.289079  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:27.446526  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:27.533586  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:27.717689  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:27.789844  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:27.946919  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:28.033100  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:28.218186  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:28.289767  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:28.447148  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:28.533535  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:28.718749  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:28.788995  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:28.947013  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:29.047377  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:29.217248  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:29.289494  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:29.447333  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:29.533287  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:29.718018  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:29.818189  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:29.946675  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:30.039328  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:30.217478  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:30.290373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:30.446940  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:30.533115  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:30.717175  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:30.789159  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:30.947131  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:31.033580  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:31.217766  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:31.290882  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:31.447000  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:31.533737  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:31.718028  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:31.789697  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:31.947195  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:32.033600  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:32.217417  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:32.294504  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:32.447573  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:32.534254  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:32.718390  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:32.789388  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:32.949646  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:33.048099  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:33.218817  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:33.289050  299029 kapi.go:107] duration metric: took 1m26.003268603s to wait for kubernetes.io/minikube-addons=registry ...
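
This duration metric closes the registry wait loop that has been logging roughly every 500ms above: kapi.go polls pods matching a label selector until they leave Pending. A minimal client-go sketch of that kind of wait — the label selector and cadence come from the log; the kube-system namespace and the kubeconfig location are assumptions:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	selector := "kubernetes.io/minikube-addons=registry" // from the log above
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			panic(err)
    		}
    		ready := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				ready = false
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    			}
    		}
    		if ready {
    			return // all matching pods are Running
    		}
    		time.Sleep(500 * time.Millisecond) // matches the cadence of the log lines
    	}
    }
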
	I1020 12:19:33.446440  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:33.533462  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:33.717281  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:33.947411  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:34.041922  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:34.217585  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:34.447149  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:34.533260  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:34.717785  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:34.948588  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:35.032738  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:35.065123  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:19:35.218044  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:35.446531  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:35.533813  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:35.718523  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:35.947254  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:36.033733  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:36.219851  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:36.366688  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.301476599s)
	W1020 12:19:36.366721  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1020 12:19:36.366798  299029 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
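
At this point the same validation failure has occurred on all three attempts (12:18:53, 12:19:07, 12:19:36), so the addon manager stops retrying and surfaces the warning above. Note that every other object in the manifests, including the gadget DaemonSet, was applied successfully each time; only the ig-crd.yaml file fails validation, which is what ultimately marks the inspektor-gadget addon as failed while the remaining addon waits continue below.
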
	I1020 12:19:36.447410  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:36.533419  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:36.717219  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:36.947721  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:37.033370  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:37.217656  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:37.447493  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:37.534048  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:37.717249  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:37.946825  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:38.033315  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:38.218799  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:38.446850  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:38.535854  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:38.718827  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:38.947780  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:39.034167  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:39.222988  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:39.451212  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:39.535950  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:39.723821  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:39.946308  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:40.033860  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:40.217227  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:40.447311  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:40.533197  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:40.717320  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:40.947280  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:41.033988  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:41.222842  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:41.447933  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:41.533426  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:41.719650  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:41.947322  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:42.034143  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:42.218071  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:42.466464  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:42.535812  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:42.718631  299029 kapi.go:107] duration metric: took 1m31.00447456s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1020 12:19:42.721968  299029 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-399470 cluster.
	I1020 12:19:42.724945  299029 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1020 12:19:42.728012  299029 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
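
The gcp-auth webhook mutates every new pod to mount the credentials secret unless the pod carries the opt-out label named in the message above (the message asks only for the key, so the value is presumably ignored). A minimal sketch of an opting-out pod built with the upstream API types; the pod name and image are placeholders, not from this run.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	pod := corev1.Pod{
    		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "no-gcp-creds", // hypothetical name
    			Labels: map[string]string{
    				// Opt-out key taken from the log line above.
    				"gcp-auth-skip-secret": "true",
    			},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{Name: "app", Image: "gcr.io/k8s-minikube/busybox"}},
    		},
    	}
    	out, err := yaml.Marshal(pod)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }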
	I1020 12:19:42.947884  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:43.033201  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:43.446772  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:43.533316  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:43.946515  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:44.033809  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:44.446040  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:44.533398  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:44.947653  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:45.043366  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:45.447590  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:45.534263  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:45.947309  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:46.033604  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:46.447481  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:46.533921  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:46.947371  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:47.039866  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:47.446978  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:47.533224  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:47.950149  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:48.033652  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:48.448020  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:48.533128  299029 kapi.go:107] duration metric: took 1m40.503421446s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1020 12:19:48.946589  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:49.446567  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:49.964514  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:50.447631  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:50.947458  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:51.447368  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:51.947283  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:52.447315  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:52.947225  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:53.447661  299029 kapi.go:107] duration metric: took 1m45.004624048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1020 12:19:53.450591  299029 out.go:179] * Enabled addons: registry-creds, ingress-dns, nvidia-device-plugin, cloud-spanner, yakd, storage-provisioner-rancher, storage-provisioner, amd-gpu-device-plugin, default-storageclass, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1020 12:19:53.453477  299029 addons.go:514] duration metric: took 1m51.704007706s for enable addons: enabled=[registry-creds ingress-dns nvidia-device-plugin cloud-spanner yakd storage-provisioner-rancher storage-provisioner amd-gpu-device-plugin default-storageclass metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1020 12:19:53.453541  299029 start.go:246] waiting for cluster config update ...
	I1020 12:19:53.453566  299029 start.go:255] writing updated cluster config ...
	I1020 12:19:53.453907  299029 ssh_runner.go:195] Run: rm -f paused
	I1020 12:19:53.457563  299029 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:19:53.461089  299029 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p2nl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.467032  299029 pod_ready.go:94] pod "coredns-66bc5c9577-p2nl7" is "Ready"
	I1020 12:19:53.467066  299029 pod_ready.go:86] duration metric: took 5.947103ms for pod "coredns-66bc5c9577-p2nl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.469510  299029 pod_ready.go:83] waiting for pod "etcd-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.474371  299029 pod_ready.go:94] pod "etcd-addons-399470" is "Ready"
	I1020 12:19:53.474462  299029 pod_ready.go:86] duration metric: took 4.923393ms for pod "etcd-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.477150  299029 pod_ready.go:83] waiting for pod "kube-apiserver-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.482631  299029 pod_ready.go:94] pod "kube-apiserver-addons-399470" is "Ready"
	I1020 12:19:53.482665  299029 pod_ready.go:86] duration metric: took 5.488931ms for pod "kube-apiserver-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.485389  299029 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.861418  299029 pod_ready.go:94] pod "kube-controller-manager-addons-399470" is "Ready"
	I1020 12:19:53.861449  299029 pod_ready.go:86] duration metric: took 376.032787ms for pod "kube-controller-manager-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:54.062027  299029 pod_ready.go:83] waiting for pod "kube-proxy-vt5tz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:54.460868  299029 pod_ready.go:94] pod "kube-proxy-vt5tz" is "Ready"
	I1020 12:19:54.460897  299029 pod_ready.go:86] duration metric: took 398.844013ms for pod "kube-proxy-vt5tz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:54.661887  299029 pod_ready.go:83] waiting for pod "kube-scheduler-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:55.061275  299029 pod_ready.go:94] pod "kube-scheduler-addons-399470" is "Ready"
	I1020 12:19:55.061307  299029 pod_ready.go:86] duration metric: took 399.392845ms for pod "kube-scheduler-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:55.061320  299029 pod_ready.go:40] duration metric: took 1.603724571s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
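
The pod_ready loop above polls each label selector until the pod reports the Ready condition (minikube also accepts the pod being gone; this sketch checks only Readiness). Not minikube's actual implementation, but a minimal client-go sketch of the same check; the selector, interval, and timeout below are illustrative.

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodsReady blocks until every pod matching selector in ns is Ready.
    func waitPodsReady(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // transient errors or no pods yet: keep polling
    			}
    			for _, p := range pods.Items {
    				ready := false
    				for _, cond := range p.Status.Conditions {
    					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    						ready = true
    						break
    					}
    				}
    				if !ready {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	c := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitPodsReady(context.Background(), c, "kube-system", "k8s-app=kube-dns"); err != nil {
    		panic(err)
    	}
    }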
	I1020 12:19:55.121610  299029 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 12:19:55.124983  299029 out.go:179] * Done! kubectl is now configured to use "addons-399470" cluster and "default" namespace by default
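
The skew note compares only the minor components (1.33 vs 1.34). A sketch of that comparison, assuming plain "major.minor.patch" version strings as logged above:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string.
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	n, _ := strconv.Atoi(parts[1])
    	return n
    }

    func main() {
    	client, cluster := "1.33.2", "1.34.1" // versions from the log line above
    	skew := minor(cluster) - minor(client)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }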
	
	
	==> CRI-O <==
	Oct 20 12:22:31 addons-399470 crio[830]: time="2025-10-20T12:22:31.024895635Z" level=info msg="Removed container 1b7430208c3f6f61600ef23d161d34daa7eb6790a43b6fc2b8524307f1fdf522: kube-system/registry-creds-764b6fb674-n7sjp/registry-creds" id=cd47796d-514e-4d32-94c8-52b29e7e06d4 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.054793352Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-cj9th/POD" id=e675c9cf-bb16-482d-8d48-49d6cafcdba5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.05486258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.068420244Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-cj9th Namespace:default ID:0eaa285a05436dba906fea700f8d9002e0dd598af7e3bb223e1fcfdda8f93c5c UID:f0f692ca-94e4-4fea-bdd4-0871e90a6624 NetNS:/var/run/netns/0fb8c8fd-c4d4-43e9-a21d-db282d45579d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400243f1f8}] Aliases:map[]}"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.068710512Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-cj9th to CNI network \"kindnet\" (type=ptp)"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.085233949Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-cj9th Namespace:default ID:0eaa285a05436dba906fea700f8d9002e0dd598af7e3bb223e1fcfdda8f93c5c UID:f0f692ca-94e4-4fea-bdd4-0871e90a6624 NetNS:/var/run/netns/0fb8c8fd-c4d4-43e9-a21d-db282d45579d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400243f1f8}] Aliases:map[]}"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.085555061Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-cj9th for CNI network kindnet (type=ptp)"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.088361054Z" level=info msg="Ran pod sandbox 0eaa285a05436dba906fea700f8d9002e0dd598af7e3bb223e1fcfdda8f93c5c with infra container: default/hello-world-app-5d498dc89-cj9th/POD" id=e675c9cf-bb16-482d-8d48-49d6cafcdba5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.096765896Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9b466915-2f3e-4185-8136-045e3763adf1 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.097172958Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9b466915-2f3e-4185-8136-045e3763adf1 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.097362121Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=9b466915-2f3e-4185-8136-045e3763adf1 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.0981257Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=6c5d5a8d-e7b0-49e2-b044-dc1eb420d9de name=/runtime.v1.ImageService/PullImage
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.101221141Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.820724654Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=6c5d5a8d-e7b0-49e2-b044-dc1eb420d9de name=/runtime.v1.ImageService/PullImage
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.821425815Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=19e78906-ad42-4ee9-b44b-e31fff3e0302 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.825464965Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c1f82fe2-6805-430d-ae58-92314eef5d21 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.833829536Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-cj9th/hello-world-app" id=948f3456-ed21-42ae-82f3-b8bebaf3dbc5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.834090577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.846375913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.846737288Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6d1160e8c7337cbc9c5f57ed7ea5405973913de64cebb3f1f1d66bee4a36041a/merged/etc/passwd: no such file or directory"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.84683118Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6d1160e8c7337cbc9c5f57ed7ea5405973913de64cebb3f1f1d66bee4a36041a/merged/etc/group: no such file or directory"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.847168801Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.869597241Z" level=info msg="Created container 2080039c6b620b106e1be4949a716077202ad1dee288009d08b8abf687fec643: default/hello-world-app-5d498dc89-cj9th/hello-world-app" id=948f3456-ed21-42ae-82f3-b8bebaf3dbc5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.872821315Z" level=info msg="Starting container: 2080039c6b620b106e1be4949a716077202ad1dee288009d08b8abf687fec643" id=a96d73c3-e7a4-4ae7-bea5-e0c17d59d95f name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:22:58 addons-399470 crio[830]: time="2025-10-20T12:22:58.880258014Z" level=info msg="Started container" PID=7242 containerID=2080039c6b620b106e1be4949a716077202ad1dee288009d08b8abf687fec643 description=default/hello-world-app-5d498dc89-cj9th/hello-world-app id=a96d73c3-e7a4-4ae7-bea5-e0c17d59d95f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0eaa285a05436dba906fea700f8d9002e0dd598af7e3bb223e1fcfdda8f93c5c
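
These CRI-O entries trace one pass through the Container Runtime Interface for the hello-world-app pod: RunPodSandbox, then ImageStatus and PullImage (the image was not cached), then CreateContainer and StartContainer. A compressed sketch of that call order against the CRI gRPC API; the socket path and configs are placeholders and error handling is elided, so this is an illustration of the sequence, not production code.

    package main

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx := context.Background()
    	// CRI-O's conventional socket; a placeholder for whatever the kubelet uses.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	img := runtimeapi.NewImageServiceClient(conn)

    	sandboxCfg := &runtimeapi.PodSandboxConfig{ /* pod metadata, elided */ }
    	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

    	// Pulled only because ImageStatus reported the image missing, as logged above.
    	img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"},
    	})

    	c, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		Config:        &runtimeapi.ContainerConfig{ /* container spec, elided */ },
    		SandboxConfig: sandboxCfg,
    	})
    	rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
    }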
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	2080039c6b620       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   0eaa285a05436       hello-world-app-5d498dc89-cj9th             default
	1abc0afd4b905       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             29 seconds ago           Exited              registry-creds                           4                   a151d1f824bd9       registry-creds-764b6fb674-n7sjp             kube-system
	388e14df50340       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   77fe01197ef91       nginx                                       default
	90d50091d577d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   cd85c59d8eed4       busybox                                     default
	bf26f1feb82cc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	c1088bae9a808       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	2ecff662c7508       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	3d5b3fe12ffc9       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	040ca55c52db6       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   0dddc6caee92f       ingress-nginx-controller-675c5ddd98-gljcq   ingress-nginx
	cc9b2f965f599       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   abab1c0318c08       gcp-auth-78565c9fb4-k8ch2                   gcp-auth
	f3d67034b0eae       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   3d3926abfce8d       gadget-qgrgn                                gadget
	fe8a3095a471f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	a624518e6294a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   5b1914a1b577c       registry-proxy-btjgg                        kube-system
	079e485c9fbfe       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   3aec213078be0       nvidia-device-plugin-daemonset-q9xwr        kube-system
	cd2ca85339d1a       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    2                   24075e10b491d       ingress-nginx-admission-patch-4xdfj         ingress-nginx
	03790aafd94f7       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   87612f77f7eee       registry-6b586f9694-lvkpj                   kube-system
	97d5ef2a92aa8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   eed72b2eb0a7d       ingress-nginx-admission-create-sf6cv        ingress-nginx
	5e2819fa3e373       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   211267160f5e9       snapshot-controller-7d9fbc56b8-gl2q6        kube-system
	8df71fb091362       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   0a75d06d73a7d       snapshot-controller-7d9fbc56b8-9l4l2        kube-system
	b021b5e25fa3c       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   de41a8a499d54       yakd-dashboard-5ff678cb9-xk78f              yakd-dashboard
	d0148fcb0cd20       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   f0fbc4b7fafd7       csi-hostpath-resizer-0                      kube-system
	f56add90136c7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	fee29d2ac0336       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   01b209fade911       local-path-provisioner-648f6765c9-8dnp7     local-path-storage
	65f042711da86       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   8c96751ff53bc       csi-hostpath-attacher-0                     kube-system
	f8f46e656fa1f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   825108adc2769       metrics-server-85b7d694d7-5rpk5             kube-system
	1576d096039be       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   5720a3f6f402f       kube-ingress-dns-minikube                   kube-system
	6e37e09210384       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   3fbbd3cf6d311       cloud-spanner-emulator-86bd5cbb97-lp67l     default
	1331a3ab9aa84       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   ba7b97887c0bb       storage-provisioner                         kube-system
	9d231cda83b6a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   bef13ab8dc885       coredns-66bc5c9577-p2nl7                    kube-system
	1e45f17a364d1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   f4e19e684f48d       kindnet-s7r92                               kube-system
	559bae86282f4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   f323733abfd62       kube-proxy-vt5tz                            kube-system
	20bd22af6ef5b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   82fdfbc19dee1       kube-controller-manager-addons-399470       kube-system
	70cb33ebef465       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   c52909663e5aa       etcd-addons-399470                          kube-system
	cb73c63d85142       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   a929f24cbe9b0       kube-scheduler-addons-399470                kube-system
	e7b4d0b02797f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   2d21c6ef8dd4d       kube-apiserver-addons-399470                kube-system
	
	
	==> coredns [9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842] <==
	[INFO] 10.244.0.17:55226 - 63680 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004640929s
	[INFO] 10.244.0.17:55226 - 56905 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000132539s
	[INFO] 10.244.0.17:55226 - 6071 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000088698s
	[INFO] 10.244.0.17:58418 - 24542 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162423s
	[INFO] 10.244.0.17:58418 - 24065 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000187785s
	[INFO] 10.244.0.17:52511 - 3388 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110139s
	[INFO] 10.244.0.17:52511 - 3593 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112182s
	[INFO] 10.244.0.17:36782 - 19695 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084743s
	[INFO] 10.244.0.17:36782 - 19201 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078648s
	[INFO] 10.244.0.17:41207 - 41020 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004344597s
	[INFO] 10.244.0.17:41207 - 40766 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004318085s
	[INFO] 10.244.0.17:33849 - 19502 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132629s
	[INFO] 10.244.0.17:33849 - 19322 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000208396s
	[INFO] 10.244.0.21:56902 - 29850 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174861s
	[INFO] 10.244.0.21:38402 - 1059 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177192s
	[INFO] 10.244.0.21:54359 - 56094 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002262748s
	[INFO] 10.244.0.21:55804 - 64583 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001905255s
	[INFO] 10.244.0.21:54141 - 58 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000179317s
	[INFO] 10.244.0.21:56647 - 30890 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000361449s
	[INFO] 10.244.0.21:41928 - 54889 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003086373s
	[INFO] 10.244.0.21:51032 - 12661 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002142148s
	[INFO] 10.244.0.21:35780 - 33125 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002302922s
	[INFO] 10.244.0.21:42170 - 54059 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001874215s
	[INFO] 10.244.0.23:46089 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000191519s
	[INFO] 10.244.0.23:36300 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096624s
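
The NXDOMAIN runs above are the pod resolver walking its search domains (ndots-style expansion) before the fully expanded cluster name finally answers NOERROR. A sketch of that candidate-list construction, assuming the usual resolv.conf semantics and a search path matching the suffixes seen in the queries:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // expand mimics resolv.conf search handling: a relative name with fewer than
    // ndots dots is tried against each search domain before being tried as-is.
    func expand(name string, search []string, ndots int) []string {
    	if strings.HasSuffix(name, ".") {
    		return []string{name} // already fully qualified
    	}
    	var out []string
    	if strings.Count(name, ".") >= ndots {
    		out = append(out, name)
    	}
    	for _, s := range search {
    		out = append(out, name+"."+s)
    	}
    	if strings.Count(name, ".") < ndots {
    		out = append(out, name)
    	}
    	return out
    }

    func main() {
    	search := []string{ // mirrors the suffixes in the logged queries
    		"kube-system.svc.cluster.local",
    		"svc.cluster.local",
    		"cluster.local",
    		"us-east-2.compute.internal",
    	}
    	for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
    		fmt.Println(q)
    	}
    }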
	
	
	==> describe nodes <==
	Name:               addons-399470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-399470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=addons-399470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_17_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-399470
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-399470"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:17:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-399470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:22:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:22:42 +0000   Mon, 20 Oct 2025 12:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:22:42 +0000   Mon, 20 Oct 2025 12:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:22:42 +0000   Mon, 20 Oct 2025 12:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:22:42 +0000   Mon, 20 Oct 2025 12:18:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-399470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8a403dc7-d68b-4de1-8372-5565f302155c
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     cloud-spanner-emulator-86bd5cbb97-lp67l      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  default                     hello-world-app-5d498dc89-cj9th              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-qgrgn                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-78565c9fb4-k8ch2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-gljcq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m52s
	  kube-system                 coredns-66bc5c9577-p2nl7                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpathplugin-zhlps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 etcd-addons-399470                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m3s
	  kube-system                 kindnet-s7r92                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-apiserver-addons-399470                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-addons-399470        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-vt5tz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-399470                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 metrics-server-85b7d694d7-5rpk5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m53s
	  kube-system                 nvidia-device-plugin-daemonset-q9xwr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 registry-6b586f9694-lvkpj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 registry-creds-764b6fb674-n7sjp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 registry-proxy-btjgg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-9l4l2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 snapshot-controller-7d9fbc56b8-gl2q6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-648f6765c9-8dnp7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xk78f               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m56s  kube-proxy       
	  Normal   Starting                 5m3s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m3s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m3s   kubelet          Node addons-399470 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s   kubelet          Node addons-399470 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s   kubelet          Node addons-399470 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m59s  node-controller  Node addons-399470 event: Registered Node addons-399470 in Controller
	  Normal   NodeReady                4m16s  kubelet          Node addons-399470 status is now: NodeReady
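
The percentages in the Allocated resources table above are requests and limits over the node's allocatable capacity, e.g. 1050m of CPU requests against 2 allocatable CPUs (2000m) is 52%. A one-line check of that arithmetic, using integer division as kubectl does:

    package main

    import "fmt"

    func main() {
    	requestsMilli, allocatableMilli := 1050, 2000 // values from the table above
    	fmt.Printf("cpu %dm (%d%%)\n", requestsMilli, requestsMilli*100/allocatableMilli)
    }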
	
	
	==> dmesg <==
	[Oct20 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016790] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.502629] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033585] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.794361] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.786595] kauditd_printk_skb: 36 callbacks suppressed
	[Oct20 11:29] hrtimer: interrupt took 3085842 ns
	[Oct20 12:16] kauditd_printk_skb: 8 callbacks suppressed
	[Oct20 12:17] overlayfs: idmapped layers are currently not supported
	[  +0.065938] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609] <==
	{"level":"warn","ts":"2025-10-20T12:17:52.694206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.711459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.737567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.776116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.785072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.796249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.820841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.837296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.850025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.864521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.881163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.904413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.918539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.934736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.954279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.979543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:53.004534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:53.041244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:53.136658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:08.762153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:08.779257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:30.844908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:30.858896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:30.888490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:30.902570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45576","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [cc9b2f965f599a7998cd982e5ff841803f0f76d5bf010b3a8798797b82c32bba] <==
	2025/10/20 12:19:41 GCP Auth Webhook started!
	2025/10/20 12:19:55 Ready to marshal response ...
	2025/10/20 12:19:55 Ready to write response ...
	2025/10/20 12:19:56 Ready to marshal response ...
	2025/10/20 12:19:56 Ready to write response ...
	2025/10/20 12:19:56 Ready to marshal response ...
	2025/10/20 12:19:56 Ready to write response ...
	2025/10/20 12:20:18 Ready to marshal response ...
	2025/10/20 12:20:18 Ready to write response ...
	2025/10/20 12:20:23 Ready to marshal response ...
	2025/10/20 12:20:23 Ready to write response ...
	2025/10/20 12:20:36 Ready to marshal response ...
	2025/10/20 12:20:36 Ready to write response ...
	2025/10/20 12:20:39 Ready to marshal response ...
	2025/10/20 12:20:39 Ready to write response ...
	2025/10/20 12:21:02 Ready to marshal response ...
	2025/10/20 12:21:02 Ready to write response ...
	2025/10/20 12:21:02 Ready to marshal response ...
	2025/10/20 12:21:02 Ready to write response ...
	2025/10/20 12:21:09 Ready to marshal response ...
	2025/10/20 12:21:09 Ready to write response ...
	2025/10/20 12:22:57 Ready to marshal response ...
	2025/10/20 12:22:57 Ready to write response ...
	
	
	==> kernel <==
	 12:22:59 up  2:05,  0 user,  load average: 0.39, 1.74, 2.66
	Linux addons-399470 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01] <==
	I1020 12:20:53.022811       1 main.go:301] handling current node
	I1020 12:21:03.020581       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:21:03.020721       1 main.go:301] handling current node
	I1020 12:21:13.019186       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:21:13.019225       1 main.go:301] handling current node
	I1020 12:21:23.022403       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:21:23.022516       1 main.go:301] handling current node
	I1020 12:21:33.021243       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:21:33.021283       1 main.go:301] handling current node
	I1020 12:21:43.020870       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:21:43.020927       1 main.go:301] handling current node
	I1020 12:21:53.021304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:21:53.021433       1 main.go:301] handling current node
	I1020 12:22:03.027717       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:22:03.027828       1 main.go:301] handling current node
	I1020 12:22:13.024275       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:22:13.024416       1 main.go:301] handling current node
	I1020 12:22:23.022150       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:22:23.022186       1 main.go:301] handling current node
	I1020 12:22:33.020167       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:22:33.020304       1 main.go:301] handling current node
	I1020 12:22:43.019866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:22:43.019903       1 main.go:301] handling current node
	I1020 12:22:53.024464       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:22:53.024496       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c] <==
	W1020 12:19:12.936975       1 handler_proxy.go:99] no RequestInfo found in the context
	W1020 12:19:12.936980       1 handler_proxy.go:99] no RequestInfo found in the context
	E1020 12:19:12.937050       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1020 12:19:12.937063       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1020 12:19:12.937065       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1020 12:19:12.938077       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1020 12:19:16.952809       1 handler_proxy.go:99] no RequestInfo found in the context
	E1020 12:19:16.952886       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1020 12:19:16.953013       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.72.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.72.218:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1020 12:19:17.001474       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1020 12:20:06.439594       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37652: use of closed network connection
	E1020 12:20:06.687219       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37690: use of closed network connection
	E1020 12:20:06.821641       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37696: use of closed network connection
	I1020 12:20:35.789877       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1020 12:20:36.179626       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1020 12:20:36.459510       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.58.183"}
	E1020 12:20:37.526697       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1020 12:20:47.600322       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1020 12:22:57.911150       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.223.210"}
	
	
	==> kube-controller-manager [20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b] <==
	I1020 12:18:00.870681       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:18:00.870696       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:18:00.872053       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 12:18:00.873261       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:18:00.872180       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:18:00.872250       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:18:00.873347       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:18:00.872133       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:18:00.880606       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1020 12:18:00.880678       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1020 12:18:00.880698       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1020 12:18:00.880714       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1020 12:18:00.880720       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1020 12:18:00.890750       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-399470" podCIDRs=["10.244.0.0/24"]
	E1020 12:18:06.618323       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1020 12:18:30.838094       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1020 12:18:30.838247       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1020 12:18:30.838285       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1020 12:18:30.869232       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1020 12:18:30.873442       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1020 12:18:30.938435       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:18:30.973790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:18:45.808706       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1020 12:19:00.944769       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1020 12:19:00.987240       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33] <==
	I1020 12:18:02.750469       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:18:02.834256       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:18:02.937576       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:18:02.937616       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1020 12:18:02.937690       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:18:03.054576       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:18:03.054628       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:18:03.070125       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:18:03.070440       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:18:03.070456       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:18:03.073844       1 config.go:200] "Starting service config controller"
	I1020 12:18:03.074892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:18:03.074922       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:18:03.074927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:18:03.074950       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:18:03.074954       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:18:03.075712       1 config.go:309] "Starting node config controller"
	I1020 12:18:03.075775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:18:03.075806       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:18:03.176260       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:18:03.176305       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:18:03.176318       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c] <==
	E1020 12:17:53.972494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:17:53.972727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:17:53.972841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:17:53.973007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:17:53.973105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:17:53.973238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:17:53.973323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:17:53.973721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:17:53.973832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:17:53.976547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:17:53.976707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:17:53.976865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:17:53.976963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:17:53.977326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:17:53.977390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:17:54.781842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:17:54.872416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:17:54.874852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:17:54.918648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:17:54.924398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1020 12:17:54.961714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:17:55.014892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:17:55.043702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:17:55.121521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1020 12:17:57.440410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:21:56 addons-399470 kubelet[1290]: I1020 12:21:56.718481    1290 scope.go:117] "RemoveContainer" containerID="a26b460500b40686da76af33b04e900fff111aa90df07ea7c54bfc2d4fbd4c0c"
	Oct 20 12:21:56 addons-399470 kubelet[1290]: I1020 12:21:56.732250    1290 scope.go:117] "RemoveContainer" containerID="f104d9888c8c67dbef2730f269850c211feab486ba34c0eec53ad46c53c87b6e"
	Oct 20 12:22:04 addons-399470 kubelet[1290]: I1020 12:22:04.504221    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-n7sjp" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:22:04 addons-399470 kubelet[1290]: I1020 12:22:04.504292    1290 scope.go:117] "RemoveContainer" containerID="1b7430208c3f6f61600ef23d161d34daa7eb6790a43b6fc2b8524307f1fdf522"
	Oct 20 12:22:04 addons-399470 kubelet[1290]: E1020 12:22:04.504693    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-n7sjp_kube-system(0fec4edc-d24b-4dc6-889b-5f70e34b4061)\"" pod="kube-system/registry-creds-764b6fb674-n7sjp" podUID="0fec4edc-d24b-4dc6-889b-5f70e34b4061"
	Oct 20 12:22:17 addons-399470 kubelet[1290]: I1020 12:22:17.501449    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-n7sjp" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:22:17 addons-399470 kubelet[1290]: I1020 12:22:17.501529    1290 scope.go:117] "RemoveContainer" containerID="1b7430208c3f6f61600ef23d161d34daa7eb6790a43b6fc2b8524307f1fdf522"
	Oct 20 12:22:17 addons-399470 kubelet[1290]: E1020 12:22:17.501699    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-n7sjp_kube-system(0fec4edc-d24b-4dc6-889b-5f70e34b4061)\"" pod="kube-system/registry-creds-764b6fb674-n7sjp" podUID="0fec4edc-d24b-4dc6-889b-5f70e34b4061"
	Oct 20 12:22:30 addons-399470 kubelet[1290]: I1020 12:22:30.500783    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-btjgg" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:22:30 addons-399470 kubelet[1290]: I1020 12:22:30.502226    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-n7sjp" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:22:30 addons-399470 kubelet[1290]: I1020 12:22:30.502298    1290 scope.go:117] "RemoveContainer" containerID="1b7430208c3f6f61600ef23d161d34daa7eb6790a43b6fc2b8524307f1fdf522"
	Oct 20 12:22:31 addons-399470 kubelet[1290]: I1020 12:22:31.004971    1290 scope.go:117] "RemoveContainer" containerID="1b7430208c3f6f61600ef23d161d34daa7eb6790a43b6fc2b8524307f1fdf522"
	Oct 20 12:22:31 addons-399470 kubelet[1290]: I1020 12:22:31.005238    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-n7sjp" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:22:31 addons-399470 kubelet[1290]: I1020 12:22:31.005307    1290 scope.go:117] "RemoveContainer" containerID="1abc0afd4b905b8282405e76835a7e48f172d7a36d3bc5f14aa7820e475ca102"
	Oct 20 12:22:31 addons-399470 kubelet[1290]: E1020 12:22:31.005592    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-n7sjp_kube-system(0fec4edc-d24b-4dc6-889b-5f70e34b4061)\"" pod="kube-system/registry-creds-764b6fb674-n7sjp" podUID="0fec4edc-d24b-4dc6-889b-5f70e34b4061"
	Oct 20 12:22:44 addons-399470 kubelet[1290]: I1020 12:22:44.501969    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-n7sjp" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:22:44 addons-399470 kubelet[1290]: I1020 12:22:44.502493    1290 scope.go:117] "RemoveContainer" containerID="1abc0afd4b905b8282405e76835a7e48f172d7a36d3bc5f14aa7820e475ca102"
	Oct 20 12:22:44 addons-399470 kubelet[1290]: E1020 12:22:44.502708    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-n7sjp_kube-system(0fec4edc-d24b-4dc6-889b-5f70e34b4061)\"" pod="kube-system/registry-creds-764b6fb674-n7sjp" podUID="0fec4edc-d24b-4dc6-889b-5f70e34b4061"
	Oct 20 12:22:56 addons-399470 kubelet[1290]: E1020 12:22:56.670190    1290 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c2d6561c4ba09f336550afec55cf64d70381ac5197bcc0cdc239abae9e753e2a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c2d6561c4ba09f336550afec55cf64d70381ac5197bcc0cdc239abae9e753e2a/diff: no such file or directory, extraDiskErr: <nil>
	Oct 20 12:22:57 addons-399470 kubelet[1290]: I1020 12:22:57.500736    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-n7sjp" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:22:57 addons-399470 kubelet[1290]: I1020 12:22:57.500954    1290 scope.go:117] "RemoveContainer" containerID="1abc0afd4b905b8282405e76835a7e48f172d7a36d3bc5f14aa7820e475ca102"
	Oct 20 12:22:57 addons-399470 kubelet[1290]: E1020 12:22:57.501397    1290 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-n7sjp_kube-system(0fec4edc-d24b-4dc6-889b-5f70e34b4061)\"" pod="kube-system/registry-creds-764b6fb674-n7sjp" podUID="0fec4edc-d24b-4dc6-889b-5f70e34b4061"
	Oct 20 12:22:57 addons-399470 kubelet[1290]: I1020 12:22:57.826240    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f0f692ca-94e4-4fea-bdd4-0871e90a6624-gcp-creds\") pod \"hello-world-app-5d498dc89-cj9th\" (UID: \"f0f692ca-94e4-4fea-bdd4-0871e90a6624\") " pod="default/hello-world-app-5d498dc89-cj9th"
	Oct 20 12:22:57 addons-399470 kubelet[1290]: I1020 12:22:57.826320    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcpjb\" (UniqueName: \"kubernetes.io/projected/f0f692ca-94e4-4fea-bdd4-0871e90a6624-kube-api-access-bcpjb\") pod \"hello-world-app-5d498dc89-cj9th\" (UID: \"f0f692ca-94e4-4fea-bdd4-0871e90a6624\") " pod="default/hello-world-app-5d498dc89-cj9th"
	Oct 20 12:22:59 addons-399470 kubelet[1290]: I1020 12:22:59.129565    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-cj9th" podStartSLOduration=1.4049132850000001 podStartE2EDuration="2.129548301s" podCreationTimestamp="2025-10-20 12:22:57 +0000 UTC" firstStartedPulling="2025-10-20 12:22:58.097682462 +0000 UTC m=+301.710170625" lastFinishedPulling="2025-10-20 12:22:58.82231747 +0000 UTC m=+302.434805641" observedRunningTime="2025-10-20 12:22:59.128932974 +0000 UTC m=+302.741421137" watchObservedRunningTime="2025-10-20 12:22:59.129548301 +0000 UTC m=+302.742036463"
	
	
	==> storage-provisioner [1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b] <==
	W1020 12:22:35.744276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:37.747236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:37.751728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:39.754422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:39.761087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:41.764029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:41.769591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:43.772200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:43.777025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:45.780989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:45.787319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:47.789948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:47.794177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:49.797383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:49.801493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:51.804353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:51.809099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:53.812418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:53.816816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:55.820182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:55.824601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:57.831127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:57.837412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:59.842380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:22:59.849650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
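The storage-provisioner log above emits the same warning every two seconds because it still watches v1 Endpoints, which the warning says is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. As a hedged sketch of the suggested replacement (illustrative only, not the provisioner's actual code; assumes a reachable kubeconfig at the default location):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// EndpointSlices carry the same endpoint data without triggering the
	// deprecation warning repeated in the provisioner log.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, "endpoints:", len(s.Endpoints))
	}
}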
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-399470 -n addons-399470
helpers_test.go:269: (dbg) Run:  kubectl --context addons-399470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-sf6cv ingress-nginx-admission-patch-4xdfj
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-399470 describe pod ingress-nginx-admission-create-sf6cv ingress-nginx-admission-patch-4xdfj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-399470 describe pod ingress-nginx-admission-create-sf6cv ingress-nginx-admission-patch-4xdfj: exit status 1 (93.159426ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sf6cv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4xdfj" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-399470 describe pod ingress-nginx-admission-create-sf6cv ingress-nginx-admission-patch-4xdfj: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (271.203037ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:23:01.347013  308687 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:23:01.347693  308687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:23:01.347706  308687 out.go:374] Setting ErrFile to fd 2...
	I1020 12:23:01.347712  308687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:23:01.347963  308687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:23:01.348255  308687 mustload.go:65] Loading cluster: addons-399470
	I1020 12:23:01.348703  308687 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:23:01.348721  308687 addons.go:606] checking whether the cluster is paused
	I1020 12:23:01.348832  308687 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:23:01.348851  308687 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:23:01.349348  308687 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:23:01.369658  308687 ssh_runner.go:195] Run: systemctl --version
	I1020 12:23:01.369730  308687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:23:01.388420  308687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:23:01.495548  308687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:23:01.495640  308687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:23:01.531887  308687 cri.go:89] found id: "1abc0afd4b905b8282405e76835a7e48f172d7a36d3bc5f14aa7820e475ca102"
	I1020 12:23:01.531966  308687 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:23:01.531986  308687 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:23:01.532008  308687 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:23:01.532044  308687 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:23:01.532070  308687 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:23:01.532090  308687 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:23:01.532108  308687 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:23:01.532142  308687 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:23:01.532164  308687 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:23:01.532185  308687 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:23:01.532205  308687 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:23:01.532240  308687 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:23:01.532258  308687 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:23:01.532279  308687 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:23:01.532300  308687 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:23:01.532346  308687 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:23:01.532397  308687 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:23:01.532422  308687 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:23:01.532439  308687 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:23:01.532448  308687 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:23:01.532451  308687 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:23:01.532454  308687 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:23:01.532457  308687 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:23:01.532460  308687 cri.go:89] found id: ""
	I1020 12:23:01.532522  308687 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:23:01.547541  308687 out.go:203] 
	W1020 12:23:01.550527  308687 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:23:01.550552  308687 out.go:285] * 
	W1020 12:23:01.557141  308687 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:23:01.560316  308687 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
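Each disable failure in this report follows the same sequence visible in the trace above: addons.go checks whether the cluster is paused, the crictl query at cri.go:54 enumerates the kube-system containers successfully, and the follow-up "sudo runc list -f json" then exits 1 because /run/runc does not exist on this crio node. A minimal Go sketch of that probe plus the CRI-level query the same trace already runs (hypothetical helper names; not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

// runcList mirrors the failing step: runc reads its state directory
// (/run/runc by default), which is absent on this crio node.
func runcList() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

// crictlList mirrors the query that succeeds a few lines earlier in the same
// trace; crictl talks to the CRI socket and does not touch /run/runc.
func crictlList() ([]byte, error) {
	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
}

func main() {
	if _, err := runcList(); err != nil {
		fmt.Println("runc list failed (as in this report):", err)
		if ids, err2 := crictlList(); err2 == nil {
			fmt.Printf("crictl fallback returned %d bytes of container IDs\n", len(ids))
		}
		return
	}
	fmt.Println("runc list succeeded")
}

Run on the node under test, this would print the same "open /run/runc: no such file or directory" error captured above while the crictl query still returns the 24 container IDs, which is why every subsequent addons disable call exits with MK_ADDON_DISABLE_PAUSED.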
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable ingress --alsologtostderr -v=1: exit status 11 (271.651716ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:23:01.618165  308734 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:23:01.619017  308734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:23:01.619054  308734 out.go:374] Setting ErrFile to fd 2...
	I1020 12:23:01.619079  308734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:23:01.619355  308734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:23:01.619683  308734 mustload.go:65] Loading cluster: addons-399470
	I1020 12:23:01.620067  308734 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:23:01.620115  308734 addons.go:606] checking whether the cluster is paused
	I1020 12:23:01.620246  308734 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:23:01.620287  308734 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:23:01.620822  308734 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:23:01.639136  308734 ssh_runner.go:195] Run: systemctl --version
	I1020 12:23:01.639186  308734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:23:01.669188  308734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:23:01.775096  308734 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:23:01.775268  308734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:23:01.805818  308734 cri.go:89] found id: "1abc0afd4b905b8282405e76835a7e48f172d7a36d3bc5f14aa7820e475ca102"
	I1020 12:23:01.805840  308734 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:23:01.805847  308734 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:23:01.805857  308734 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:23:01.805861  308734 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:23:01.805865  308734 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:23:01.805868  308734 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:23:01.805871  308734 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:23:01.805875  308734 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:23:01.805881  308734 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:23:01.805884  308734 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:23:01.805887  308734 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:23:01.805891  308734 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:23:01.805894  308734 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:23:01.805898  308734 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:23:01.805903  308734 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:23:01.805947  308734 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:23:01.805959  308734 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:23:01.805963  308734 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:23:01.805966  308734 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:23:01.805972  308734 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:23:01.805975  308734 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:23:01.805978  308734 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:23:01.805981  308734 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:23:01.805988  308734 cri.go:89] found id: ""
	I1020 12:23:01.806046  308734 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:23:01.820868  308734 out.go:203] 
	W1020 12:23:01.823687  308734 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:23:01.823714  308734 out.go:285] * 
	W1020 12:23:01.830115  308734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:23:01.833095  308734 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.09s)

TestAddons/parallel/InspektorGadget (6.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qgrgn" [d5c1fc8b-4847-49b9-ad83-339d608e1292] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003591564s
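The healthy-within check at addons_test.go:823 polls the gadget namespace until every pod labeled k8s-app=gadget reports Running. A sketch of that polling pattern with client-go (hypothetical helper; the repository's helpers_test.go may differ):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls until at least one pod matches selector in ns and
// all matching pods report phase Running.
func waitForLabeledPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists are retried
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabeledPods(cs, "gadget", "k8s-app=gadget", 8*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("k8s-app=gadget pods are Running")
}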
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (308.355757ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:20:35.488857  306228 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:35.489709  306228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:35.489735  306228 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:35.489742  306228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:35.490034  306228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:35.490368  306228 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:35.490818  306228 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:35.490843  306228 addons.go:606] checking whether the cluster is paused
	I1020 12:20:35.490960  306228 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:35.490984  306228 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:35.491464  306228 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:35.512794  306228 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:35.512858  306228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:35.533700  306228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:35.639397  306228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:35.639500  306228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:35.699446  306228 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:35.699465  306228 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:35.699470  306228 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:35.699474  306228 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:35.699478  306228 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:35.699483  306228 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:35.699487  306228 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:35.699490  306228 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:35.699493  306228 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:35.699501  306228 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:35.699504  306228 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:35.699507  306228 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:35.699510  306228 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:35.699513  306228 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:35.699516  306228 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:35.699524  306228 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:35.699527  306228 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:35.699533  306228 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:35.699536  306228 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:35.699539  306228 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:35.699543  306228 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:35.699546  306228 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:35.699549  306228 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:35.699552  306228 cri.go:89] found id: ""
	I1020 12:20:35.699606  306228 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:35.720332  306228 out.go:203] 
	W1020 12:20:35.723086  306228 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:35.723119  306228 out.go:285] * 
	W1020 12:20:35.731776  306228 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:35.734668  306228 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.31s)

TestAddons/parallel/MetricsServer (6.38s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.832005ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003694824s
addons_test.go:463: (dbg) Run:  kubectl --context addons-399470 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (278.007119ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:20:29.206500  306166 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:29.207810  306166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:29.207852  306166 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:29.207876  306166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:29.208203  306166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:29.208608  306166 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:29.209087  306166 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:29.209128  306166 addons.go:606] checking whether the cluster is paused
	I1020 12:20:29.209280  306166 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:29.209319  306166 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:29.209835  306166 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:29.229944  306166 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:29.230007  306166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:29.248831  306166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:29.359434  306166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:29.359528  306166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:29.396641  306166 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:29.396666  306166 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:29.396672  306166 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:29.396680  306166 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:29.396684  306166 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:29.396727  306166 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:29.396743  306166 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:29.396748  306166 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:29.396751  306166 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:29.396763  306166 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:29.396805  306166 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:29.396818  306166 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:29.396823  306166 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:29.396852  306166 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:29.396883  306166 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:29.396910  306166 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:29.396920  306166 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:29.396955  306166 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:29.396970  306166 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:29.396974  306166 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:29.397001  306166 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:29.397015  306166 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:29.397018  306166 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:29.397040  306166 cri.go:89] found id: ""
	I1020 12:20:29.397152  306166 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:29.413566  306166 out.go:203] 
	W1020 12:20:29.416752  306166 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:29.416776  306166 out.go:285] * 
	* 
	W1020 12:20:29.423190  306166 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:29.426434  306166 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.38s)
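
Note on the failure mode: every addons disable/enable error in this run exits with status 11 for the same reason. Before touching an addon, minikube checks whether the cluster is paused by listing CRI containers (which succeeds, as the cri.go "found id" lines above show) and then running "sudo runc list -f json" on the node, which fails because /run/runc does not exist on this CRI-O node. A minimal reproduction sketch, assuming the profile name, docker driver, and kicbase node from this run:

	# paused-state check as the log shows it, replayed by hand (commands mirror the ssh_runner lines above)
	minikube -p addons-399470 ssh -- sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # succeeds: prints container IDs
	minikube -p addons-399470 ssh -- sudo runc list -f json                                                                # fails: open /run/runc: no such file or directory

The MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED exits in the remaining failures below are this same check failing.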

TestAddons/parallel/CSI (38.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1020 12:20:10.242098  298259 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1020 12:20:10.251451  298259 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1020 12:20:10.251843  298259 kapi.go:107] duration metric: took 9.760231ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 9.975946ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-399470 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-399470 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d070aba0-9cab-4faa-b66f-23e6d6a715e3] Pending
helpers_test.go:352: "task-pv-pod" [d070aba0-9cab-4faa-b66f-23e6d6a715e3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d070aba0-9cab-4faa-b66f-23e6d6a715e3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004236895s
addons_test.go:572: (dbg) Run:  kubectl --context addons-399470 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-399470 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-399470 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-399470 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-399470 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-399470 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-399470 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [f5a88fe7-176f-4196-be1f-c98d33933358] Pending
helpers_test.go:352: "task-pv-pod-restore" [f5a88fe7-176f-4196-be1f-c98d33933358] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [f5a88fe7-176f-4196-be1f-c98d33933358] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004142029s
addons_test.go:614: (dbg) Run:  kubectl --context addons-399470 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-399470 delete pod task-pv-pod-restore: (1.305029136s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-399470 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-399470 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (273.495898ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:20:48.084693  306883 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:48.085569  306883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:48.085619  306883 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:48.085640  306883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:48.085976  306883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:48.086347  306883 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:48.086768  306883 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:48.086813  306883 addons.go:606] checking whether the cluster is paused
	I1020 12:20:48.086951  306883 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:48.086996  306883 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:48.087522  306883 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:48.106139  306883 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:48.106198  306883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:48.125825  306883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:48.230941  306883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:48.231040  306883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:48.264207  306883 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:48.264228  306883 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:48.264233  306883 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:48.264238  306883 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:48.264242  306883 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:48.264245  306883 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:48.264248  306883 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:48.264252  306883 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:48.264254  306883 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:48.264261  306883 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:48.264264  306883 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:48.264267  306883 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:48.264270  306883 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:48.264273  306883 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:48.264276  306883 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:48.264281  306883 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:48.264284  306883 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:48.264288  306883 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:48.264291  306883 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:48.264294  306883 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:48.264300  306883 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:48.264303  306883 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:48.264307  306883 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:48.264310  306883 cri.go:89] found id: ""
	I1020 12:20:48.264412  306883 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:48.280258  306883 out.go:203] 
	W1020 12:20:48.283424  306883 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:48.283455  306883 out.go:285] * 
	* 
	W1020 12:20:48.290061  306883 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:48.293204  306883 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (259.081922ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:20:48.348139  306926 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:48.349109  306926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:48.349153  306926 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:48.349174  306926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:48.349604  306926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:48.350301  306926 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:48.350988  306926 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:48.351011  306926 addons.go:606] checking whether the cluster is paused
	I1020 12:20:48.351151  306926 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:48.351175  306926 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:48.351918  306926 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:48.370164  306926 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:48.370224  306926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:48.387385  306926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:48.491148  306926 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:48.491241  306926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:48.523961  306926 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:48.524026  306926 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:48.524047  306926 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:48.524068  306926 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:48.524104  306926 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:48.524125  306926 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:48.524143  306926 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:48.524163  306926 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:48.524184  306926 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:48.524228  306926 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:48.524255  306926 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:48.524275  306926 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:48.524295  306926 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:48.524316  306926 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:48.524344  306926 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:48.524418  306926 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:48.524433  306926 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:48.524440  306926 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:48.524443  306926 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:48.524446  306926 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:48.524453  306926 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:48.524457  306926 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:48.524460  306926 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:48.524463  306926 cri.go:89] found id: ""
	I1020 12:20:48.524546  306926 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:48.539963  306926 out.go:203] 
	W1020 12:20:48.542973  306926 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:48.543000  306926 out.go:285] * 
	* 
	W1020 12:20:48.549352  306926 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:48.553276  306926 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (38.32s)
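
The storage path itself passed here: hpvc bound, task-pv-pod became healthy within ~12s, and the snapshot restore produced a Running task-pv-pod-restore within ~7s. Only the trailing volumesnapshots and csi-hostpath-driver disable calls failed, again on the paused-state check. A hedged way to confirm the missing runc state directory on the node (container name taken from this run; assumes docker exec lands as root, as the empty Config.User in the docker inspect output later in this report suggests):

	docker exec addons-399470 ls -ld /run/runc    # expected to fail: No such file or directory, matching the error above
	docker exec addons-399470 crictl ps --quiet   # CRI-O itself answers, so only the runc state lookup is broken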

TestAddons/parallel/Headlamp (3.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-399470 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-399470 --alsologtostderr -v=1: exit status 11 (288.031976ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:20:07.162964  305204 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:07.163881  305204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:07.163895  305204 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:07.163902  305204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:07.164207  305204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:07.164643  305204 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:07.165012  305204 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:07.165023  305204 addons.go:606] checking whether the cluster is paused
	I1020 12:20:07.165127  305204 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:07.165144  305204 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:07.165604  305204 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:07.185940  305204 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:07.186007  305204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:07.204024  305204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:07.311580  305204 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:07.311658  305204 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:07.349145  305204 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:07.349220  305204 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:07.349241  305204 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:07.349262  305204 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:07.349291  305204 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:07.349319  305204 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:07.349339  305204 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:07.349359  305204 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:07.349380  305204 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:07.349416  305204 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:07.349435  305204 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:07.349455  305204 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:07.349475  305204 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:07.349506  305204 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:07.349525  305204 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:07.349547  305204 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:07.349596  305204 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:07.349622  305204 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:07.349643  305204 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:07.349662  305204 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:07.349685  305204 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:07.349715  305204 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:07.349734  305204 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:07.349754  305204 cri.go:89] found id: ""
	I1020 12:20:07.349838  305204 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:07.368649  305204 out.go:203] 
	W1020 12:20:07.371522  305204 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:07.371549  305204 out.go:285] * 
	* 
	W1020 12:20:07.378148  305204 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:07.381221  305204 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-399470 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-399470
helpers_test.go:243: (dbg) docker inspect addons-399470:

-- stdout --
	[
	    {
	        "Id": "feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e",
	        "Created": "2025-10-20T12:17:30.309681277Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:17:30.379063183Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e/hostname",
	        "HostsPath": "/var/lib/docker/containers/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e/hosts",
	        "LogPath": "/var/lib/docker/containers/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e/feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e-json.log",
	        "Name": "/addons-399470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-399470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-399470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "feca8d58fd702e47c00e5aacc7e645de33ce160f65a01415e4f80e7ca669ef1e",
	                "LowerDir": "/var/lib/docker/overlay2/d9f7cc9e743a0ee4922d5bf484897f5a2ee3b1487ccbbdc2d98acfeb6c319e8d-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9f7cc9e743a0ee4922d5bf484897f5a2ee3b1487ccbbdc2d98acfeb6c319e8d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9f7cc9e743a0ee4922d5bf484897f5a2ee3b1487ccbbdc2d98acfeb6c319e8d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9f7cc9e743a0ee4922d5bf484897f5a2ee3b1487ccbbdc2d98acfeb6c319e8d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-399470",
	                "Source": "/var/lib/docker/volumes/addons-399470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-399470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-399470",
	                "name.minikube.sigs.k8s.io": "addons-399470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64501902fdd201d4a8cb029fd7ca931da996468a10fd70cf66a0e3976149cd7a",
	            "SandboxKey": "/var/run/docker/netns/64501902fdd2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-399470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:5f:2e:27:19:be",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ce96cfa0d925123f13aa5a319160f0921a5320860cfb9b4d9bc94640f9e40690",
	                    "EndpointID": "660f76ea179077c81f697755930b43000ebe63e4022ac0c4e06f324bd74e4900",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-399470",
	                        "feca8d58fd70"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
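
The 22/tcp mapping above (127.0.0.1:33138) is the same SSH endpoint every failing addon command dialed (the sshutil.go:53 lines). To extract it standalone, the same docker template the harness logs can be run as-is (command copied verbatim from the cli_runner lines above):

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470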
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-399470 -n addons-399470
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-399470 logs -n 25: (1.440458531s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-509805 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-509805   │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │ 20 Oct 25 12:16 UTC │
	│ delete  │ -p download-only-509805                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-509805   │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │ 20 Oct 25 12:16 UTC │
	│ start   │ -o=json --download-only -p download-only-029467 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-029467   │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:17 UTC │
	│ delete  │ -p download-only-029467                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-029467   │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:17 UTC │
	│ delete  │ -p download-only-509805                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-509805   │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:17 UTC │
	│ delete  │ -p download-only-029467                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-029467   │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:17 UTC │
	│ start   │ --download-only -p download-docker-415037 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-415037 │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │                     │
	│ delete  │ -p download-docker-415037                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-415037 │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:17 UTC │
	│ start   │ --download-only -p binary-mirror-776162 --alsologtostderr --binary-mirror http://127.0.0.1:35451 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-776162   │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │                     │
	│ delete  │ -p binary-mirror-776162                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-776162   │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:17 UTC │
	│ addons  │ enable dashboard -p addons-399470                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │                     │
	│ addons  │ disable dashboard -p addons-399470                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │                     │
	│ start   │ -p addons-399470 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:17 UTC │ 20 Oct 25 12:19 UTC │
	│ addons  │ addons-399470 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:19 UTC │                     │
	│ addons  │ addons-399470 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	│ addons  │ enable headlamp -p addons-399470 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-399470          │ jenkins │ v1.37.0 │ 20 Oct 25 12:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:17:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
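Each entry below uses the glog-style prefix documented above: a severity letter (I, W, E or F), the date as mmdd, a timestamp, the thread id, and the source file and line. One quick way to triage a long start log is to filter on that severity letter; a minimal sketch, assuming the log has been saved locally as last-start.log (a hypothetical filename):

	# keep only warning-, error- and fatal-level lines
	grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log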
	I1020 12:17:03.166956  299029 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:17:03.167079  299029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:17:03.167091  299029 out.go:374] Setting ErrFile to fd 2...
	I1020 12:17:03.167096  299029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:17:03.167373  299029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:17:03.167844  299029 out.go:368] Setting JSON to false
	I1020 12:17:03.168730  299029 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7174,"bootTime":1760955450,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 12:17:03.168807  299029 start.go:141] virtualization:  
	I1020 12:17:03.172171  299029 out.go:179] * [addons-399470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 12:17:03.175964  299029 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:17:03.176078  299029 notify.go:220] Checking for updates...
	I1020 12:17:03.181824  299029 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:17:03.184978  299029 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:17:03.187988  299029 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 12:17:03.190989  299029 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 12:17:03.193974  299029 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:17:03.197190  299029 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:17:03.230152  299029 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 12:17:03.230287  299029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:17:03.304607  299029 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-20 12:17:03.287974184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:17:03.304720  299029 docker.go:318] overlay module found
	I1020 12:17:03.307882  299029 out.go:179] * Using the docker driver based on user configuration
	I1020 12:17:03.310749  299029 start.go:305] selected driver: docker
	I1020 12:17:03.310780  299029 start.go:925] validating driver "docker" against <nil>
	I1020 12:17:03.310796  299029 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:17:03.311553  299029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:17:03.377839  299029 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-20 12:17:03.368779365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:17:03.378005  299029 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:17:03.378248  299029 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:17:03.381234  299029 out.go:179] * Using Docker driver with root privileges
	I1020 12:17:03.384054  299029 cni.go:84] Creating CNI manager for ""
	I1020 12:17:03.384133  299029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:17:03.384148  299029 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:17:03.384243  299029 start.go:349] cluster config:
	{Name:addons-399470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:17:03.389439  299029 out.go:179] * Starting "addons-399470" primary control-plane node in "addons-399470" cluster
	I1020 12:17:03.392249  299029 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:17:03.395285  299029 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:17:03.398213  299029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:17:03.398293  299029 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 12:17:03.398334  299029 cache.go:58] Caching tarball of preloaded images
	I1020 12:17:03.398338  299029 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:17:03.398459  299029 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 12:17:03.398472  299029 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 12:17:03.398845  299029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/config.json ...
	I1020 12:17:03.398884  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/config.json: {Name:mk5fa3974ca8c54458c0ea6e39b79eac041c96b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:03.415327  299029 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 12:17:03.415469  299029 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1020 12:17:03.415503  299029 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1020 12:17:03.415513  299029 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1020 12:17:03.415524  299029 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1020 12:17:03.415529  299029 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1020 12:17:21.380747  299029 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1020 12:17:21.380786  299029 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:17:21.380829  299029 start.go:360] acquireMachinesLock for addons-399470: {Name:mk012d6cf29d0e9498230bc3f730a78d550291e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:17:21.380957  299029 start.go:364] duration metric: took 109.983µs to acquireMachinesLock for "addons-399470"
	I1020 12:17:21.380985  299029 start.go:93] Provisioning new machine with config: &{Name:addons-399470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:17:21.381061  299029 start.go:125] createHost starting for "" (driver="docker")
	I1020 12:17:21.384445  299029 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1020 12:17:21.384677  299029 start.go:159] libmachine.API.Create for "addons-399470" (driver="docker")
	I1020 12:17:21.384729  299029 client.go:168] LocalClient.Create starting
	I1020 12:17:21.384842  299029 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem
	I1020 12:17:22.195478  299029 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem
	I1020 12:17:23.622043  299029 cli_runner.go:164] Run: docker network inspect addons-399470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 12:17:23.636749  299029 cli_runner.go:211] docker network inspect addons-399470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 12:17:23.636843  299029 network_create.go:284] running [docker network inspect addons-399470] to gather additional debugging logs...
	I1020 12:17:23.636870  299029 cli_runner.go:164] Run: docker network inspect addons-399470
	W1020 12:17:23.652590  299029 cli_runner.go:211] docker network inspect addons-399470 returned with exit code 1
	I1020 12:17:23.652625  299029 network_create.go:287] error running [docker network inspect addons-399470]: docker network inspect addons-399470: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-399470 not found
	I1020 12:17:23.652640  299029 network_create.go:289] output of [docker network inspect addons-399470]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-399470 not found
	
	** /stderr **
	I1020 12:17:23.652758  299029 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:17:23.669025  299029 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a24550}
	I1020 12:17:23.669069  299029 network_create.go:124] attempt to create docker network addons-399470 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1020 12:17:23.669131  299029 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-399470 addons-399470
	I1020 12:17:23.724793  299029 network_create.go:108] docker network addons-399470 192.168.49.0/24 created
	I1020 12:17:23.724827  299029 kic.go:121] calculated static IP "192.168.49.2" for the "addons-399470" container
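The network_create step above picked the first free private /24 (192.168.49.0/24) and reserved 192.168.49.2 for the node container. For anyone replaying this outside the harness, the result can be confirmed with a plain docker command; a sketch, assuming the network name matches the profile name used in this run:

	# print the subnet and gateway of the network minikube just created
	docker network inspect addons-399470 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'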
	I1020 12:17:23.724916  299029 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 12:17:23.740653  299029 cli_runner.go:164] Run: docker volume create addons-399470 --label name.minikube.sigs.k8s.io=addons-399470 --label created_by.minikube.sigs.k8s.io=true
	I1020 12:17:23.757399  299029 oci.go:103] Successfully created a docker volume addons-399470
	I1020 12:17:23.757506  299029 cli_runner.go:164] Run: docker run --rm --name addons-399470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399470 --entrypoint /usr/bin/test -v addons-399470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 12:17:25.831159  299029 cli_runner.go:217] Completed: docker run --rm --name addons-399470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399470 --entrypoint /usr/bin/test -v addons-399470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.073596971s)
	I1020 12:17:25.831194  299029 oci.go:107] Successfully prepared a docker volume addons-399470
	I1020 12:17:25.831220  299029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:17:25.831238  299029 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 12:17:25.831305  299029 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-399470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 12:17:30.235995  299029 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-399470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.404636943s)
	I1020 12:17:30.236035  299029 kic.go:203] duration metric: took 4.404789954s to extract preloaded images to volume ...
	W1020 12:17:30.236182  299029 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1020 12:17:30.236296  299029 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 12:17:30.295422  299029 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-399470 --name addons-399470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-399470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-399470 --network addons-399470 --ip 192.168.49.2 --volume addons-399470:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 12:17:30.573434  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Running}}
	I1020 12:17:30.600030  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:17:30.619844  299029 cli_runner.go:164] Run: docker exec addons-399470 stat /var/lib/dpkg/alternatives/iptables
	I1020 12:17:30.672162  299029 oci.go:144] the created container "addons-399470" has a running status.
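The docker run above publishes the node's SSH, Docker, registry and API server ports (22, 2376, 5000, 8443, 32443) to ephemeral loopback-only host ports; the SSH mapping is what the provisioner dials next (port 33138 in this run). The assignments can be listed directly; a sketch against the container created here:

	# show which 127.0.0.1 host ports were bound to the exposed container ports
	docker port addons-399470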
	I1020 12:17:30.672193  299029 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa...
	I1020 12:17:31.979368  299029 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 12:17:32.014554  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:17:32.031318  299029 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 12:17:32.031353  299029 kic_runner.go:114] Args: [docker exec --privileged addons-399470 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 12:17:32.074866  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:17:32.092942  299029 machine.go:93] provisionDockerMachine start ...
	I1020 12:17:32.093047  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:32.110353  299029 main.go:141] libmachine: Using SSH client type: native
	I1020 12:17:32.110680  299029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1020 12:17:32.110695  299029 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:17:32.259922  299029 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-399470
	
	I1020 12:17:32.259949  299029 ubuntu.go:182] provisioning hostname "addons-399470"
	I1020 12:17:32.260013  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:32.277534  299029 main.go:141] libmachine: Using SSH client type: native
	I1020 12:17:32.277837  299029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1020 12:17:32.277853  299029 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-399470 && echo "addons-399470" | sudo tee /etc/hostname
	I1020 12:17:32.433240  299029 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-399470
	
	I1020 12:17:32.433322  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:32.450483  299029 main.go:141] libmachine: Using SSH client type: native
	I1020 12:17:32.450793  299029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1020 12:17:32.450813  299029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-399470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-399470/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-399470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:17:32.596399  299029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:17:32.596423  299029 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 12:17:32.596459  299029 ubuntu.go:190] setting up certificates
	I1020 12:17:32.596469  299029 provision.go:84] configureAuth start
	I1020 12:17:32.596526  299029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399470
	I1020 12:17:32.612783  299029 provision.go:143] copyHostCerts
	I1020 12:17:32.612872  299029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 12:17:32.613001  299029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 12:17:32.613066  299029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 12:17:32.613125  299029 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.addons-399470 san=[127.0.0.1 192.168.49.2 addons-399470 localhost minikube]
	I1020 12:17:32.886848  299029 provision.go:177] copyRemoteCerts
	I1020 12:17:32.886916  299029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:17:32.886988  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:32.903037  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.022250  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 12:17:33.040073  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1020 12:17:33.057449  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 12:17:33.075131  299029 provision.go:87] duration metric: took 478.638155ms to configureAuth
	I1020 12:17:33.075161  299029 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:17:33.075354  299029 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:17:33.075459  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.092243  299029 main.go:141] libmachine: Using SSH client type: native
	I1020 12:17:33.092612  299029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1020 12:17:33.092638  299029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:17:33.349846  299029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:17:33.349869  299029 machine.go:96] duration metric: took 1.256903785s to provisionDockerMachine
	I1020 12:17:33.349880  299029 client.go:171] duration metric: took 11.965140982s to LocalClient.Create
	I1020 12:17:33.349923  299029 start.go:167] duration metric: took 11.965244129s to libmachine.API.Create "addons-399470"
	I1020 12:17:33.349941  299029 start.go:293] postStartSetup for "addons-399470" (driver="docker")
	I1020 12:17:33.349953  299029 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:17:33.350043  299029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:17:33.350108  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.366577  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.473133  299029 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:17:33.476346  299029 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:17:33.476390  299029 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:17:33.476402  299029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 12:17:33.476472  299029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 12:17:33.476501  299029 start.go:296] duration metric: took 126.551017ms for postStartSetup
	I1020 12:17:33.476822  299029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399470
	I1020 12:17:33.493094  299029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/config.json ...
	I1020 12:17:33.493398  299029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:17:33.493446  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.510056  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.609373  299029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:17:33.613978  299029 start.go:128] duration metric: took 12.232902505s to createHost
	I1020 12:17:33.614006  299029 start.go:83] releasing machines lock for "addons-399470", held for 12.233038842s
	I1020 12:17:33.614077  299029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-399470
	I1020 12:17:33.631448  299029 ssh_runner.go:195] Run: cat /version.json
	I1020 12:17:33.631510  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.631804  299029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:17:33.631874  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:17:33.653228  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.664148  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:17:33.756145  299029 ssh_runner.go:195] Run: systemctl --version
	I1020 12:17:33.848889  299029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:17:33.885627  299029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:17:33.889969  299029 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:17:33.890041  299029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:17:33.918264  299029 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1020 12:17:33.918341  299029 start.go:495] detecting cgroup driver to use...
	I1020 12:17:33.918411  299029 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 12:17:33.918487  299029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:17:33.934704  299029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:17:33.947217  299029 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:17:33.947283  299029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:17:33.964211  299029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:17:33.982445  299029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:17:34.099460  299029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:17:34.213310  299029 docker.go:234] disabling docker service ...
	I1020 12:17:34.213420  299029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:17:34.234090  299029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:17:34.247513  299029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:17:34.360786  299029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:17:34.476863  299029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:17:34.489355  299029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:17:34.503150  299029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:17:34.503218  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.511584  299029 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 12:17:34.511654  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.519797  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.528223  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.537286  299029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:17:34.545551  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.554033  299029 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.567024  299029 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:17:34.576523  299029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:17:34.583949  299029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:17:34.591577  299029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:17:34.696260  299029 ssh_runner.go:195] Run: sudo systemctl restart crio
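The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, the cgroup manager is forced to "cgroupfs" with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A spot check from the host; a sketch, assuming minikube ssh is pointed at this profile:

	# expect pause_image, cgroup_manager = "cgroupfs" and conmon_cgroup = "pod" in the drop-in
	minikube -p addons-399470 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf"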
	I1020 12:17:34.814713  299029 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:17:34.814848  299029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:17:34.818612  299029 start.go:563] Will wait 60s for crictl version
	I1020 12:17:34.818730  299029 ssh_runner.go:195] Run: which crictl
	I1020 12:17:34.822225  299029 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:17:34.849915  299029 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 12:17:34.850090  299029 ssh_runner.go:195] Run: crio --version
	I1020 12:17:34.882627  299029 ssh_runner.go:195] Run: crio --version
	I1020 12:17:34.913762  299029 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:17:34.916643  299029 cli_runner.go:164] Run: docker network inspect addons-399470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:17:34.932391  299029 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1020 12:17:34.936233  299029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 12:17:34.945605  299029 kubeadm.go:883] updating cluster {Name:addons-399470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:17:34.945732  299029 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:17:34.945794  299029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:17:34.986806  299029 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:17:34.986830  299029 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:17:34.986887  299029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:17:35.013805  299029 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:17:35.013831  299029 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:17:35.013840  299029 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1020 12:17:35.013929  299029 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-399470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
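The drop-in above clears the packaged ExecStart and relaunches the kubelet with the per-node flags (--node-ip, --hostname-override, the bootstrap kubeconfig). It is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below; to see the unit exactly as systemd resolves it on the node, a sketch via minikube ssh:

	# print kubelet.service together with the 10-kubeadm.conf override
	minikube -p addons-399470 ssh "sudo systemctl cat kubelet"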
	I1020 12:17:35.014017  299029 ssh_runner.go:195] Run: crio config
	I1020 12:17:35.085286  299029 cni.go:84] Creating CNI manager for ""
	I1020 12:17:35.085312  299029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:17:35.085334  299029 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:17:35.085358  299029 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-399470 NodeName:addons-399470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:17:35.085485  299029 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-399470"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
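	This rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new on the node (see the scp line below). Given a local copy, recent kubeadm releases can lint it before use; a sketch, assuming the validate subcommand is available in the v1.34 CLI and the file is saved as kubeadm.yaml:

	# check the v1beta4 InitConfiguration/ClusterConfiguration/KubeletConfiguration documents
	kubeadm config validate --config kubeadm.yaml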
	
	I1020 12:17:35.085569  299029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:17:35.094109  299029 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:17:35.094188  299029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:17:35.102253  299029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1020 12:17:35.115688  299029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:17:35.129492  299029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1020 12:17:35.142803  299029 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:17:35.146602  299029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
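After these two rewrites the node's /etc/hosts carries both synthetic names minikube depends on: host.minikube.internal at the network gateway (192.168.49.1) and control-plane.minikube.internal at the node IP (192.168.49.2). A quick check; a sketch via minikube ssh:

	# expect one line per minikube.internal entry added above
	minikube -p addons-399470 ssh "grep minikube.internal /etc/hosts"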
	I1020 12:17:35.157150  299029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:17:35.276025  299029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:17:35.293368  299029 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470 for IP: 192.168.49.2
	I1020 12:17:35.293393  299029 certs.go:195] generating shared ca certs ...
	I1020 12:17:35.293410  299029 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:35.293604  299029 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 12:17:35.789098  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt ...
	I1020 12:17:35.789132  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt: {Name:mk100687b17b53131e0ad96dd826d6f897d4f422 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:35.789333  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key ...
	I1020 12:17:35.789346  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key: {Name:mkc65d9e10e235e5d5e977982a5ddd0c3440b521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:35.789436  299029 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 12:17:36.397692  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt ...
	I1020 12:17:36.397723  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt: {Name:mk53e5ae88cbe9f151f8c7f76ee9f32d78c9d216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.397918  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key ...
	I1020 12:17:36.397931  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key: {Name:mk9aef008c6b34655ab99530acbbbd634dfd5779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.398010  299029 certs.go:257] generating profile certs ...
	I1020 12:17:36.398068  299029 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.key
	I1020 12:17:36.398085  299029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt with IP's: []
	I1020 12:17:36.611936  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt ...
	I1020 12:17:36.611972  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: {Name:mk9d33fbc882caec5030ee07719998e29823b3c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.612164  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.key ...
	I1020 12:17:36.612177  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.key: {Name:mk821d742b4d8e0a258529e6c4b5fe608906fafa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.612251  299029 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key.e372c2f9
	I1020 12:17:36.612274  299029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt.e372c2f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1020 12:17:36.963750  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt.e372c2f9 ...
	I1020 12:17:36.963780  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt.e372c2f9: {Name:mk621c277847c7a16ba8eebe5483bab8d9f18b73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.963962  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key.e372c2f9 ...
	I1020 12:17:36.963978  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key.e372c2f9: {Name:mk9b7e396ff325e604f1bcd3fac4cf83fb2bd240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:36.964067  299029 certs.go:382] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt.e372c2f9 -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt
	I1020 12:17:36.964145  299029 certs.go:386] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key.e372c2f9 -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key
	I1020 12:17:36.964200  299029 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.key
	I1020 12:17:36.964220  299029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.crt with IP's: []
	I1020 12:17:37.132444  299029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.crt ...
	I1020 12:17:37.132474  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.crt: {Name:mk9f53b1fc4bffe3eaccea535de5d059f57e4f96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:37.132681  299029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.key ...
	I1020 12:17:37.132694  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.key: {Name:mk0dbdca144dc8b482affaba665cc25d72548a9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:17:37.132880  299029 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 12:17:37.132927  299029 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 12:17:37.132956  299029 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:17:37.132984  299029 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
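The CA material above is generated by minikube's own Go helpers (crypto.go), not by shelling out. For orientation only, a rough openssl equivalent of the "minikubeCA" step; the 2048-bit key size and 10-year lifetime are assumptions for the sketch, not values read from this run:

    # Illustrative stand-in for the "generating minikubeCA ca cert" step above.
    openssl genrsa -out ca.key 2048                      # assumed key size
    openssl req -x509 -new -nodes -key ca.key \
      -subj "/CN=minikubeCA" -days 3650 -out ca.crt      # assumed lifetime
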
	I1020 12:17:37.133606  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:17:37.152147  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 12:17:37.171012  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:17:37.189081  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 12:17:37.206829  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1020 12:17:37.224484  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:17:37.241518  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:17:37.259107  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 12:17:37.278180  299029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:17:37.295536  299029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:17:37.308306  299029 ssh_runner.go:195] Run: openssl version
	I1020 12:17:37.315002  299029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:17:37.323662  299029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:17:37.327726  299029 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:17:37.327841  299029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:17:37.376778  299029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
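The b5213941.0 link name is not arbitrary: it is OpenSSL's subject-name hash of the CA (computed by the `openssl x509 -hash -noout` call two lines up) plus a .0 suffix, which is how TLS stacks locate a CA inside /etc/ssl/certs:

    # Reproduce the trust-store link name used above:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941   (hence the symlink /etc/ssl/certs/b5213941.0)
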
	I1020 12:17:37.385292  299029 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:17:37.389045  299029 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 12:17:37.389094  299029 kubeadm.go:400] StartCluster: {Name:addons-399470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-399470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:17:37.389176  299029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:17:37.389240  299029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:17:37.416094  299029 cri.go:89] found id: ""
	I1020 12:17:37.416180  299029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:17:37.424247  299029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:17:37.431841  299029 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 12:17:37.431949  299029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:17:37.439788  299029 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 12:17:37.439808  299029 kubeadm.go:157] found existing configuration files:
	
	I1020 12:17:37.439862  299029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 12:17:37.447607  299029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 12:17:37.447672  299029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 12:17:37.455145  299029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 12:17:37.462509  299029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 12:17:37.462648  299029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:17:37.469775  299029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 12:17:37.478991  299029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 12:17:37.479095  299029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:17:37.486275  299029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 12:17:37.493979  299029 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 12:17:37.494117  299029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
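The four grep-then-rm exchanges above are one pattern repeated per kubeconfig file: if the file does not point at the expected endpoint (or, as here, does not exist at all), delete it so kubeadm regenerates it. A compact equivalent of the loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 \
        "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
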
	I1020 12:17:37.501462  299029 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 12:17:37.539763  299029 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 12:17:37.539946  299029 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 12:17:37.570119  299029 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 12:17:37.570253  299029 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1020 12:17:37.570306  299029 kubeadm.go:318] OS: Linux
	I1020 12:17:37.570390  299029 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 12:17:37.570484  299029 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1020 12:17:37.570560  299029 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 12:17:37.570644  299029 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 12:17:37.570725  299029 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 12:17:37.570822  299029 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 12:17:37.570896  299029 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 12:17:37.570978  299029 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 12:17:37.571054  299029 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1020 12:17:37.654745  299029 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 12:17:37.654876  299029 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 12:17:37.654976  299029 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 12:17:37.663889  299029 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 12:17:37.670357  299029 out.go:252]   - Generating certificates and keys ...
	I1020 12:17:37.670471  299029 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 12:17:37.670548  299029 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 12:17:38.969651  299029 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 12:17:39.252870  299029 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 12:17:39.825944  299029 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 12:17:40.335493  299029 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 12:17:40.735291  299029 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 12:17:40.735668  299029 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-399470 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1020 12:17:41.299058  299029 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 12:17:41.299433  299029 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-399470 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1020 12:17:41.919399  299029 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 12:17:43.034137  299029 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 12:17:43.544017  299029 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 12:17:43.544301  299029 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 12:17:44.358635  299029 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 12:17:44.620628  299029 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 12:17:45.539093  299029 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 12:17:46.548465  299029 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 12:17:47.054938  299029 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 12:17:47.055725  299029 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 12:17:47.060239  299029 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 12:17:47.063786  299029 out.go:252]   - Booting up control plane ...
	I1020 12:17:47.063903  299029 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 12:17:47.063986  299029 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 12:17:47.064338  299029 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 12:17:47.079679  299029 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 12:17:47.079801  299029 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 12:17:47.087382  299029 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 12:17:47.087729  299029 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 12:17:47.087777  299029 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 12:17:47.221954  299029 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 12:17:47.222079  299029 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 12:17:49.223547  299029 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.00188487s
	I1020 12:17:49.227391  299029 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 12:17:49.227506  299029 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1020 12:17:49.227821  299029 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 12:17:49.228002  299029 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 12:17:52.078746  299029 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.850327761s
	I1020 12:17:53.981280  299029 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.753555593s
	I1020 12:17:55.729941  299029 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502163524s
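The health endpoints kubeadm polls above are ordinary HTTP(S) endpoints and can be probed by hand from inside the node; the control-plane components serve self-signed certificates, hence -k:

    curl -s  http://127.0.0.1:10248/healthz     # kubelet
    curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
    curl -sk https://192.168.49.2:8443/livez    # kube-apiserver
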
	I1020 12:17:55.750126  299029 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 12:17:55.765490  299029 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 12:17:55.778899  299029 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 12:17:55.779218  299029 kubeadm.go:318] [mark-control-plane] Marking the node addons-399470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 12:17:55.790743  299029 kubeadm.go:318] [bootstrap-token] Using token: ekj0pw.jhw5dgl2640j8feo
	I1020 12:17:55.795865  299029 out.go:252]   - Configuring RBAC rules ...
	I1020 12:17:55.796056  299029 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 12:17:55.798937  299029 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 12:17:55.811415  299029 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 12:17:55.817375  299029 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 12:17:55.823909  299029 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 12:17:55.831607  299029 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 12:17:56.139796  299029 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 12:17:56.588892  299029 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 12:17:57.137117  299029 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 12:17:57.138327  299029 kubeadm.go:318] 
	I1020 12:17:57.138422  299029 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 12:17:57.138433  299029 kubeadm.go:318] 
	I1020 12:17:57.138515  299029 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 12:17:57.138524  299029 kubeadm.go:318] 
	I1020 12:17:57.138552  299029 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 12:17:57.138618  299029 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 12:17:57.138675  299029 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 12:17:57.138684  299029 kubeadm.go:318] 
	I1020 12:17:57.138741  299029 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 12:17:57.138768  299029 kubeadm.go:318] 
	I1020 12:17:57.138822  299029 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 12:17:57.138831  299029 kubeadm.go:318] 
	I1020 12:17:57.138886  299029 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 12:17:57.138968  299029 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 12:17:57.139043  299029 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 12:17:57.139052  299029 kubeadm.go:318] 
	I1020 12:17:57.139141  299029 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 12:17:57.139224  299029 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 12:17:57.139231  299029 kubeadm.go:318] 
	I1020 12:17:57.139319  299029 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ekj0pw.jhw5dgl2640j8feo \
	I1020 12:17:57.139432  299029 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 \
	I1020 12:17:57.139458  299029 kubeadm.go:318] 	--control-plane 
	I1020 12:17:57.139466  299029 kubeadm.go:318] 
	I1020 12:17:57.139554  299029 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 12:17:57.139563  299029 kubeadm.go:318] 
	I1020 12:17:57.139649  299029 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ekj0pw.jhw5dgl2640j8feo \
	I1020 12:17:57.139761  299029 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 
	I1020 12:17:57.142838  299029 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1020 12:17:57.143128  299029 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1020 12:17:57.143262  299029 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
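The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the CA's public key, which lets a joining node pin the cluster CA before trusting it. It can be recomputed from the ca.crt in the certificateDir kubeadm was pointed at; the recipe below is the standard one from the kubeadm documentation and assumes an RSA CA key:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # -> b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5
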
	I1020 12:17:57.143289  299029 cni.go:84] Creating CNI manager for ""
	I1020 12:17:57.143297  299029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:17:57.146542  299029 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 12:17:57.149404  299029 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:17:57.153341  299029 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:17:57.153412  299029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:17:57.166103  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
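With the "docker" driver + "crio" runtime combination detected, the manifest applied above is kindnet. A quick way to confirm it came up, sketched under the assumption that the manifest names its DaemonSet kindnet in kube-system:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get ds kindnet -o wide
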
	I1020 12:17:57.455191  299029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:17:57.455358  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:57.455504  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-399470 minikube.k8s.io/updated_at=2025_10_20T12_17_57_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=addons-399470 minikube.k8s.io/primary=true
	I1020 12:17:57.604161  299029 ops.go:34] apiserver oom_adj: -16
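The -16 read from /proc/$(pgrep kube-apiserver)/oom_adj is the legacy view of the kubelet's OOM protection for critical static pods: the value actually written is oom_score_adj=-997, which the old -17..15 scale rounds to -16. (The -997 figure is the kubelet's documented setting for critical pods, not something read from this run.)

    # Modern counterpart of the oom_adj probe above:
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj
    # -> -997 expected for a critical static pod
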
	I1020 12:17:57.604352  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:58.104460  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:58.605407  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:59.104456  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:17:59.605101  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:00.108674  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:00.605085  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:01.104900  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:01.604570  299029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 12:18:01.748232  299029 kubeadm.go:1113] duration metric: took 4.292935629s to wait for elevateKubeSystemPrivileges
	I1020 12:18:01.748261  299029 kubeadm.go:402] duration metric: took 24.359171245s to StartCluster
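The burst of `kubectl get sa default` calls between 12:17:57 and 12:18:01 is a poll loop: minikube retries roughly every 500ms until the default ServiceAccount exists (the sign that the controller-manager is serving), which is what the 4.29s "wait for elevateKubeSystemPrivileges" metric measures. A hand-rolled equivalent:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5    # the log shows retries roughly every 500ms
    done
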
	I1020 12:18:01.748278  299029 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:18:01.748419  299029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:18:01.748830  299029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:18:01.749032  299029 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:18:01.749168  299029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 12:18:01.749405  299029 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:18:01.749447  299029 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1020 12:18:01.749538  299029 addons.go:69] Setting yakd=true in profile "addons-399470"
	I1020 12:18:01.749555  299029 addons.go:238] Setting addon yakd=true in "addons-399470"
	I1020 12:18:01.749576  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.750055  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.750555  299029 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-399470"
	I1020 12:18:01.750574  299029 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-399470"
	I1020 12:18:01.750598  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.751022  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.753853  299029 out.go:179] * Verifying Kubernetes components...
	I1020 12:18:01.755071  299029 addons.go:69] Setting registry=true in profile "addons-399470"
	I1020 12:18:01.755097  299029 addons.go:238] Setting addon registry=true in "addons-399470"
	I1020 12:18:01.755127  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.755564  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.755727  299029 addons.go:69] Setting registry-creds=true in profile "addons-399470"
	I1020 12:18:01.756298  299029 addons.go:238] Setting addon registry-creds=true in "addons-399470"
	I1020 12:18:01.756355  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.759057  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.755858  299029 addons.go:69] Setting storage-provisioner=true in profile "addons-399470"
	I1020 12:18:01.760251  299029 addons.go:238] Setting addon storage-provisioner=true in "addons-399470"
	I1020 12:18:01.760306  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.755869  299029 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-399470"
	I1020 12:18:01.755879  299029 addons.go:69] Setting volcano=true in profile "addons-399470"
	I1020 12:18:01.755885  299029 addons.go:69] Setting volumesnapshots=true in profile "addons-399470"
	I1020 12:18:01.756210  299029 addons.go:69] Setting ingress=true in profile "addons-399470"
	I1020 12:18:01.756220  299029 addons.go:69] Setting cloud-spanner=true in profile "addons-399470"
	I1020 12:18:01.756227  299029 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-399470"
	I1020 12:18:01.756234  299029 addons.go:69] Setting default-storageclass=true in profile "addons-399470"
	I1020 12:18:01.756241  299029 addons.go:69] Setting gcp-auth=true in profile "addons-399470"
	I1020 12:18:01.756248  299029 addons.go:69] Setting inspektor-gadget=true in profile "addons-399470"
	I1020 12:18:01.756253  299029 addons.go:69] Setting ingress-dns=true in profile "addons-399470"
	I1020 12:18:01.756270  299029 addons.go:69] Setting metrics-server=true in profile "addons-399470"
	I1020 12:18:01.756277  299029 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-399470"
	I1020 12:18:01.760797  299029 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-399470"
	I1020 12:18:01.760937  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.770591  299029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:18:01.771235  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.774978  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.790428  299029 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-399470"
	I1020 12:18:01.790807  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.791266  299029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-399470"
	I1020 12:18:01.791757  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.808569  299029 mustload.go:65] Loading cluster: addons-399470
	I1020 12:18:01.808788  299029 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:18:01.809040  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.814255  299029 addons.go:238] Setting addon volcano=true in "addons-399470"
	I1020 12:18:01.814378  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.814858  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.822604  299029 addons.go:238] Setting addon inspektor-gadget=true in "addons-399470"
	I1020 12:18:01.822668  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.823146  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.840499  299029 addons.go:238] Setting addon ingress-dns=true in "addons-399470"
	I1020 12:18:01.840569  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.841053  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.843072  299029 addons.go:238] Setting addon volumesnapshots=true in "addons-399470"
	I1020 12:18:01.843152  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.843694  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.860465  299029 addons.go:238] Setting addon metrics-server=true in "addons-399470"
	I1020 12:18:01.860523  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.861023  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.876140  299029 addons.go:238] Setting addon ingress=true in "addons-399470"
	I1020 12:18:01.876272  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.876886  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.895292  299029 addons.go:238] Setting addon cloud-spanner=true in "addons-399470"
	I1020 12:18:01.895348  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.896031  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.915272  299029 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-399470"
	I1020 12:18:01.915321  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:01.915784  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:01.917932  299029 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1020 12:18:01.921392  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1020 12:18:01.921462  299029 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1020 12:18:01.921572  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
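The `docker container inspect -f` template above (repeated for every addon file pushed below) resolves which host port Docker mapped to the container's 22/tcp, i.e. where the node's sshd is reachable; the sshutil lines that follow show it resolving to 127.0.0.1:33138 in this run:

    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-399470
    # -> 33138
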
	I1020 12:18:01.975939  299029 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1020 12:18:01.978119  299029 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1020 12:18:01.978712  299029 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1020 12:18:01.979159  299029 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1020 12:18:01.993573  299029 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 12:18:01.993663  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1020 12:18:01.993763  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:01.994394  299029 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 12:18:01.994414  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1020 12:18:01.994458  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.003377  299029 addons.go:238] Setting addon default-storageclass=true in "addons-399470"
	I1020 12:18:02.003428  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:02.003889  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:02.005244  299029 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 12:18:02.005272  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1020 12:18:02.005335  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.032519  299029 out.go:179]   - Using image docker.io/registry:3.0.0
	I1020 12:18:02.051998  299029 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1020 12:18:02.052019  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1020 12:18:02.052085  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.076470  299029 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:18:02.077496  299029 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-399470"
	I1020 12:18:02.077533  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:02.077934  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:02.084182  299029 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:18:02.084217  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:18:02.084287  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.107216  299029 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1020 12:18:02.110878  299029 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 12:18:02.110908  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1020 12:18:02.110984  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.128217  299029 host.go:66] Checking if "addons-399470" exists ...
	W1020 12:18:02.128564  299029 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1020 12:18:02.133714  299029 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1020 12:18:02.148058  299029 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1020 12:18:02.148098  299029 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1020 12:18:02.148170  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.196151  299029 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1020 12:18:02.199079  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1020 12:18:02.203980  299029 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 12:18:02.205564  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1020 12:18:02.205609  299029 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1020 12:18:02.205680  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.226284  299029 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1020 12:18:02.245234  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.248534  299029 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1020 12:18:02.248560  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1020 12:18:02.248624  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.249085  299029 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 12:18:02.250330  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1020 12:18:02.260551  299029 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:18:02.278714  299029 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:18:02.278792  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.264728  299029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 12:18:02.283069  299029 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 12:18:02.283166  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1020 12:18:02.283234  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.308935  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1020 12:18:02.313457  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1020 12:18:02.316496  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1020 12:18:02.319650  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1020 12:18:02.322570  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1020 12:18:02.325458  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1020 12:18:02.328470  299029 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1020 12:18:02.328902  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.328977  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.329368  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.333451  299029 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1020 12:18:02.333483  299029 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1020 12:18:02.333555  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.336474  299029 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1020 12:18:02.339819  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1020 12:18:02.339845  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1020 12:18:02.339933  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.381473  299029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:18:02.381634  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.382646  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.384963  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.426712  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.429379  299029 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1020 12:18:02.434523  299029 out.go:179]   - Using image docker.io/busybox:stable
	I1020 12:18:02.439547  299029 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 12:18:02.439577  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1020 12:18:02.439668  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:02.441379  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.456468  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.469022  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.487802  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.506330  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.516228  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	W1020 12:18:02.523548  299029 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1020 12:18:02.523581  299029 retry.go:31] will retry after 362.188358ms: ssh: handshake failed: EOF
	I1020 12:18:02.528569  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:02.871320  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1020 12:18:02.871387  299029 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	W1020 12:18:02.887479  299029 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1020 12:18:02.887548  299029 retry.go:31] will retry after 483.199552ms: ssh: handshake failed: EOF
	I1020 12:18:03.048081  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1020 12:18:03.048108  299029 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1020 12:18:03.068093  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1020 12:18:03.100995  299029 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1020 12:18:03.101023  299029 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1020 12:18:03.119723  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1020 12:18:03.119748  299029 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1020 12:18:03.145407  299029 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1020 12:18:03.145430  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1020 12:18:03.161679  299029 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1020 12:18:03.161758  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1020 12:18:03.205288  299029 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1020 12:18:03.205313  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1020 12:18:03.213238  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1020 12:18:03.230620  299029 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1020 12:18:03.230648  299029 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1020 12:18:03.234749  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1020 12:18:03.265828  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1020 12:18:03.292798  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1020 12:18:03.352099  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1020 12:18:03.356639  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:18:03.372994  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1020 12:18:03.376795  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:18:03.390876  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1020 12:18:03.398008  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1020 12:18:03.479754  299029 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 12:18:03.479830  299029 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1020 12:18:03.481010  299029 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1020 12:18:03.481067  299029 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1020 12:18:03.481661  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1020 12:18:03.481707  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1020 12:18:03.609841  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1020 12:18:03.711195  299029 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1020 12:18:03.711226  299029 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1020 12:18:03.728790  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1020 12:18:03.728818  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1020 12:18:03.882752  299029 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1020 12:18:03.882809  299029 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1020 12:18:03.919376  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1020 12:18:03.919403  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1020 12:18:04.086935  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1020 12:18:04.086967  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1020 12:18:04.138521  299029 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:04.138605  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1020 12:18:04.142056  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1020 12:18:04.142130  299029 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1020 12:18:04.204448  299029 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.925366032s)
	I1020 12:18:04.204523  299029 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
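The pipeline that just completed is the mechanism behind the host-record message above and is worth unrolling. Reformatted for readability with nothing added or changed (and assuming kubectl already points at the cluster, where the logged version pins --kubeconfig explicitly), it fetches the coredns ConfigMap, uses sed to splice a hosts block resolving host.minikube.internal to 192.168.49.1 (the docker network gateway) in front of the forward directive plus a log directive after errors, then feeds the result back through kubectl replace:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -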
	I1020 12:18:04.205521  299029 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.824019711s)
	I1020 12:18:04.206229  299029 node_ready.go:35] waiting up to 6m0s for node "addons-399470" to be "Ready" ...
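node_ready.go will now poll this node object for up to six minutes, logging a will-retry line on each unready observation (several appear below). A one-line sketch of the same wait, not what minikube actually runs, using the node name and timeout from the log:

    kubectl wait --for=condition=Ready node/addons-399470 --timeout=6m0s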
	I1020 12:18:04.350299  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:04.382846  299029 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 12:18:04.382923  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1020 12:18:04.410725  299029 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1020 12:18:04.410792  299029 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1020 12:18:04.599643  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1020 12:18:04.599668  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1020 12:18:04.606310  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 12:18:04.666475  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.598315257s)
	I1020 12:18:04.711368  299029 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-399470" context rescaled to 1 replicas
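The rescale above trims the default two-replica coredns deployment down to one for this single-node cluster. A kubectl rendering of the same operation (minikube performs it through client-go rather than the CLI):

    kubectl -n kube-system scale deployment coredns --replicas=1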
	I1020 12:18:04.810868  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1020 12:18:04.810905  299029 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1020 12:18:04.979428  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1020 12:18:04.979449  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1020 12:18:05.154185  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1020 12:18:05.154253  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1020 12:18:05.408700  299029 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1020 12:18:05.408767  299029 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1020 12:18:05.569805  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1020 12:18:06.226811  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:06.435397  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.222123021s)
	I1020 12:18:06.435509  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.200735457s)
	I1020 12:18:06.435560  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.169669314s)
	I1020 12:18:06.541349  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.248457898s)
	I1020 12:18:06.545100  299029 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-399470 service yakd-dashboard -n yakd-dashboard
	
	I1020 12:18:07.140126  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.787944365s)
	I1020 12:18:07.278434  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.921719472s)
	I1020 12:18:07.278542  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.905452882s)
	I1020 12:18:07.278623  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.887669677s)
	I1020 12:18:07.278756  299029 addons.go:479] Verifying addon registry=true in "addons-399470"
	I1020 12:18:07.278659  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.901701352s)
	I1020 12:18:07.281821  299029 out.go:179] * Verifying registry addon...
	I1020 12:18:07.285779  299029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1020 12:18:07.300225  299029 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 12:18:07.300244  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
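Each kapi.go:96 line below is one tick of a poll over the pods matching that label selector, repeated until the registry pod leaves Pending. A rough standalone equivalent, with two caveats: kubectl wait checks the Ready condition rather than the pod phase minikube tracks, and it errors out if no pod matches the selector yet (here one already does, per kapi.go:86):

    kubectl -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=6m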
	I1020 12:18:07.790641  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:08.020526  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.410640348s)
	I1020 12:18:08.020567  299029 addons.go:479] Verifying addon metrics-server=true in "addons-399470"
	I1020 12:18:08.020661  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.670272965s)
	W1020 12:18:08.020681  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:08.020697  299029 retry.go:31] will retry after 355.586941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
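This failure is deterministic, not transient: client-side validation rejects ig-crd.yaml because one of its YAML documents carries no apiVersion or kind header, so the identical error will recur on every retry below (12:18:11 through 12:18:24) regardless of backoff. The escape hatch kubectl itself suggests in the message, which minikube does not take and which would skip validation for the whole file, would be:

    kubectl apply --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml \
      -f /etc/kubernetes/addons/ig-deployment.yaml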
	I1020 12:18:08.020742  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.622659292s)
	I1020 12:18:08.020756  299029 addons.go:479] Verifying addon ingress=true in "addons-399470"
	I1020 12:18:08.020888  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.414522374s)
	W1020 12:18:08.020976  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1020 12:18:08.021009  299029 retry.go:31] will retry after 288.111231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
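Unlike the ig-crd failure, this one is an ordering race rather than a bad manifest: the csi-hostpath-snapclass object is submitted in the same apply as the CRD that defines its kind, before the API server has established the new type. minikube recovers with the forced re-apply at 12:18:08.309 below; an alternative sketch that serializes the two steps instead, using the file names from the log:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml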
	I1020 12:18:08.024695  299029 out.go:179] * Verifying ingress addon...
	I1020 12:18:08.029707  299029 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1020 12:18:08.049433  299029 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1020 12:18:08.049458  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:08.297487  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:08.309758  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1020 12:18:08.377125  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:08.434092  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.86416577s)
	I1020 12:18:08.434194  299029 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-399470"
	I1020 12:18:08.439312  299029 out.go:179] * Verifying csi-hostpath-driver addon...
	I1020 12:18:08.443037  299029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1020 12:18:08.459972  299029 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 12:18:08.459999  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:08.558485  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:08.710913  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:08.800199  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:08.953455  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:09.033288  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:09.291071  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:09.446377  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:09.547188  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:09.738003  299029 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1020 12:18:09.738135  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:09.754637  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
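These three lines show how minikube reaches a docker-driver "machine": it asks dockerd which host port is bound to the container's 22/tcp, then opens an SSH session to 127.0.0.1 on that port with the per-machine key as user docker. The same two steps by hand, with every value taken verbatim from the log:

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-399470)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa \
      docker@127.0.0.1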
	I1020 12:18:09.789428  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:09.881996  299029 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1020 12:18:09.895181  299029 addons.go:238] Setting addon gcp-auth=true in "addons-399470"
	I1020 12:18:09.895233  299029 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:18:09.895696  299029 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:18:09.914194  299029 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1020 12:18:09.914265  299029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:18:09.934329  299029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:18:09.947324  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:10.033363  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:10.289120  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:10.445910  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:10.532935  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:10.788696  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:10.946611  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:11.033333  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:11.155016  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.845206385s)
	I1020 12:18:11.155120  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.777950466s)
	W1020 12:18:11.155150  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:11.155166  299029 retry.go:31] will retry after 504.015335ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
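The retry.go:31 lines trace minikube's backoff for this apply: the delay between attempts wanders upward with jitter (356ms, 504ms, 834ms, 701ms, 1.13s, 1.58s, 3.19s, 3.71s across the attempts in this section) rather than doubling cleanly. A minimal shell sketch of the loop shape only, with invented variable names and a clean doubling standing in for the jittered schedule:

    # retry a failing apply with capped, roughly doubling backoff
    delay=0.3
    for attempt in 1 2 3 4 5 6 7 8; do
      kubectl apply --force -f ig-crd.yaml -f ig-deployment.yaml && break
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')
    done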
	I1020 12:18:11.155215  299029 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.241000553s)
	I1020 12:18:11.158570  299029 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1020 12:18:11.161717  299029 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1020 12:18:11.164696  299029 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1020 12:18:11.164729  299029 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1020 12:18:11.178640  299029 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1020 12:18:11.178671  299029 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1020 12:18:11.192068  299029 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1020 12:18:11.192091  299029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1020 12:18:11.204777  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W1020 12:18:11.211046  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:11.289765  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:11.447302  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:11.534416  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:11.659515  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:11.707260  299029 addons.go:479] Verifying addon gcp-auth=true in "addons-399470"
	I1020 12:18:11.710507  299029 out.go:179] * Verifying gcp-auth addon...
	I1020 12:18:11.714156  299029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1020 12:18:11.726296  299029 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1020 12:18:11.726374  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:11.825629  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:11.947091  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:12.033762  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:12.217102  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:12.289886  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:12.448134  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1020 12:18:12.524562  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:12.524593  299029 retry.go:31] will retry after 834.266364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:12.533656  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:12.717568  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:12.789428  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:12.946221  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:13.033376  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:13.217332  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:13.217699  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:13.289704  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:13.360018  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:13.447040  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:13.534016  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:13.717891  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:13.788951  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:13.946588  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:14.033432  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:14.181717  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:14.181750  299029 retry.go:31] will retry after 701.065345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:14.217449  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:14.289215  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:14.446628  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:14.532447  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:14.716779  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:14.789497  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:14.883860  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:14.946104  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:15.033805  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:15.218145  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:15.289271  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:15.447199  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:15.533603  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:15.711011  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:15.718043  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:15.754348  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:15.754380  299029 retry.go:31] will retry after 1.128567092s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:15.789478  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:15.946469  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:16.033525  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:16.217998  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:16.288799  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:16.446086  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:16.532909  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:16.717570  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:16.789301  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:16.883668  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:16.954157  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:17.034690  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:17.218110  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:17.289083  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:17.446801  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:17.534787  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:17.711416  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:17.730288  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:17.774124  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:17.774159  299029 retry.go:31] will retry after 1.578236639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:17.788727  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:17.946922  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:18.032854  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:18.217753  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:18.289921  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:18.446150  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:18.533126  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:18.717896  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:18.788946  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:18.946743  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:19.033565  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:19.216927  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:19.288894  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:19.353035  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:19.446407  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:19.535849  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:19.717686  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:19.789070  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:19.946038  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:20.034213  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:20.188233  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:20.188266  299029 retry.go:31] will retry after 3.193820206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1020 12:18:20.210248  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:20.217085  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:20.289131  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:20.446194  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:20.533010  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:20.717680  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:20.789434  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:20.946195  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:21.033277  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:21.217669  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:21.289701  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:21.446694  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:21.533387  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:21.718583  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:21.789642  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:21.946630  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:22.033581  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:22.210899  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:22.217552  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:22.289493  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:22.446191  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:22.533717  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:22.717379  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:22.789039  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:22.945877  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:23.033005  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:23.218203  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:23.289127  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:23.382316  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:23.452839  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:23.534059  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:23.718119  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:23.818949  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:23.946948  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:24.033661  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:24.211541  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:24.217325  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:24.225116  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:24.225147  299029 retry.go:31] will retry after 3.710606073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:24.288917  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:24.446161  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:24.533950  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:24.717502  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:24.789467  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:24.946286  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:25.033496  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:25.217506  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:25.289697  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:25.446711  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:25.533314  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:25.718245  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:25.788869  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:25.946996  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:26.033559  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:26.218006  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:26.288588  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:26.446610  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:26.533575  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:26.710621  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:26.717202  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:26.789021  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:26.946727  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:27.032744  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:27.218219  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:27.289423  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:27.446911  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:27.548124  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:27.718031  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:27.789557  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:27.936688  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:27.947601  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:28.034202  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:28.217915  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:28.289133  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:28.445998  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:28.533466  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:28.711628  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:28.717680  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1020 12:18:28.736255  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:28.736339  299029 retry.go:31] will retry after 8.963590507s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 12:18:28.789304  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:28.945837  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:29.033411  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:29.217853  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:29.288557  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:29.446705  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:29.534064  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:29.717483  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:29.789327  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:29.946129  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:30.033680  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:30.217317  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:30.289051  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:30.446387  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:30.533534  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:30.717056  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:30.788696  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:30.946481  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:31.033377  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:31.210597  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:31.217541  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:31.289481  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:31.446316  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:31.533828  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:31.717625  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:31.789442  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:31.946603  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:32.033448  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:32.217118  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:32.288717  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:32.447033  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:32.532884  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:32.717388  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:32.789590  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:32.946712  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:33.033874  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:33.217433  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:33.289427  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:33.446321  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:33.533284  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:33.710102  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:33.717149  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:33.788866  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:33.946711  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:34.032818  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:34.217956  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:34.289142  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:34.446367  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:34.533493  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:34.717903  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:34.789053  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:34.946617  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:35.033660  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:35.217404  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:35.289113  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:35.445924  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:35.532655  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:35.710891  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:35.718081  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:35.789590  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:35.946454  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:36.033433  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:36.217244  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:36.289050  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:36.446012  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:36.533255  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:36.717578  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:36.789680  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:36.946575  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:37.033516  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:37.218394  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:37.289424  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:37.446701  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:37.533348  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:37.700754  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1020 12:18:37.710997  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:37.719192  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:37.788961  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:37.946342  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:38.034641  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:38.218527  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:38.289984  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:38.447086  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1020 12:18:38.503693  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:38.503727  299029 retry.go:31] will retry after 13.986886357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1020 12:18:38.532549  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:38.717328  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:38.788918  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:38.946667  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:39.032670  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:39.216936  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:39.288408  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:39.446311  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:39.533549  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:39.717566  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:39.789134  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:39.946519  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:40.036127  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:40.210776  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:40.217610  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:40.289295  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:40.446483  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:40.533328  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:40.716978  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:40.788617  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:40.946521  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:41.033838  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:41.218337  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:41.289109  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:41.446218  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:41.533265  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:41.719328  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:41.789019  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:41.946817  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:42.033016  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1020 12:18:42.211270  299029 node_ready.go:57] node "addons-399470" has "Ready":"False" status (will retry)
	I1020 12:18:42.218139  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:42.288979  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:42.445974  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:42.533130  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:42.716909  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:42.788888  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:42.946705  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:43.033718  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:43.217712  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:43.289547  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:43.451997  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:43.550284  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:43.737467  299029 node_ready.go:49] node "addons-399470" is "Ready"
	I1020 12:18:43.737496  299029 node_ready.go:38] duration metric: took 39.530206121s for node "addons-399470" to be "Ready" ...
	I1020 12:18:43.737510  299029 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:18:43.737568  299029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:18:43.741060  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:43.765942  299029 api_server.go:72] duration metric: took 42.016875914s to wait for apiserver process to appear ...
	I1020 12:18:43.765967  299029 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:18:43.765988  299029 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1020 12:18:43.804346  299029 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1020 12:18:43.805868  299029 api_server.go:141] control plane version: v1.34.1
	I1020 12:18:43.805896  299029 api_server.go:131] duration metric: took 39.920295ms to wait for apiserver health ...
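The healthz wait above amounts to an HTTPS GET against the apiserver's /healthz endpoint, repeated until it answers 200 with body "ok". A minimal sketch of that probe follows, with the assumption (for brevity) that certificate verification is skipped; minikube itself trusts the cluster CA from the kubeconfig:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200, the same readiness signal logged by api_server.go above.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for brevity: skip TLS verification instead of
			// loading the cluster CA the way minikube does.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
			resp, err := client.Get(url)
			if err != nil {
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		_ = waitHealthz("https://192.168.49.2:8443/healthz", time.Minute)
	}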
	I1020 12:18:43.805906  299029 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:18:43.842500  299029 system_pods.go:59] 19 kube-system pods found
	I1020 12:18:43.842536  299029 system_pods.go:61] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:43.842544  299029 system_pods.go:61] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending
	I1020 12:18:43.842550  299029 system_pods.go:61] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending
	I1020 12:18:43.842554  299029 system_pods.go:61] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending
	I1020 12:18:43.842558  299029 system_pods.go:61] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:43.842563  299029 system_pods.go:61] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:43.842571  299029 system_pods.go:61] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:43.842575  299029 system_pods.go:61] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:43.842591  299029 system_pods.go:61] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:43.842597  299029 system_pods.go:61] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:43.842608  299029 system_pods.go:61] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:43.842614  299029 system_pods.go:61] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:43.842619  299029 system_pods.go:61] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending
	I1020 12:18:43.842632  299029 system_pods.go:61] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:43.842640  299029 system_pods.go:61] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:43.842645  299029 system_pods.go:61] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending
	I1020 12:18:43.842650  299029 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending
	I1020 12:18:43.842658  299029 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending
	I1020 12:18:43.842663  299029 system_pods.go:61] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending
	I1020 12:18:43.842671  299029 system_pods.go:74] duration metric: took 36.759689ms to wait for pod list to return data ...
	I1020 12:18:43.842684  299029 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:18:43.842963  299029 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1020 12:18:43.842983  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
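Every "waiting for pod" line in this log is one iteration of the same poll: list the pods matching a label selector and keep going until each match has left the Pending phase. A sketch of that check with client-go, assuming the default kubeconfig location; the namespace and selector are taken from the log above:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allRunning lists the pods matching selector and reports whether every
	// one of them has reached the Running phase (an empty list counts as
	// not ready yet, matching the Pending lines in the log).
	func allRunning(cs kubernetes.Interface, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, err
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		for {
			ok, _ := allRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry")
			if ok {
				fmt.Println("all registry pods are Running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}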
	I1020 12:18:43.853397  299029 default_sa.go:45] found service account: "default"
	I1020 12:18:43.853425  299029 default_sa.go:55] duration metric: took 10.733541ms for default service account to be created ...
	I1020 12:18:43.853435  299029 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:18:43.882726  299029 system_pods.go:86] 19 kube-system pods found
	I1020 12:18:43.882772  299029 system_pods.go:89] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:43.882782  299029 system_pods.go:89] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 12:18:43.882788  299029 system_pods.go:89] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending
	I1020 12:18:43.882792  299029 system_pods.go:89] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending
	I1020 12:18:43.882796  299029 system_pods.go:89] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:43.882801  299029 system_pods.go:89] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:43.882806  299029 system_pods.go:89] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:43.882810  299029 system_pods.go:89] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:43.882817  299029 system_pods.go:89] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:43.882830  299029 system_pods.go:89] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:43.882835  299029 system_pods.go:89] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:43.882848  299029 system_pods.go:89] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:43.882853  299029 system_pods.go:89] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending
	I1020 12:18:43.882865  299029 system_pods.go:89] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:43.882872  299029 system_pods.go:89] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:43.882883  299029 system_pods.go:89] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending
	I1020 12:18:43.882887  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending
	I1020 12:18:43.882891  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending
	I1020 12:18:43.882895  299029 system_pods.go:89] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending
	I1020 12:18:43.882912  299029 retry.go:31] will retry after 254.407106ms: missing components: kube-dns
	I1020 12:18:43.986443  299029 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1020 12:18:43.986471  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:44.078371  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:44.151779  299029 system_pods.go:86] 19 kube-system pods found
	I1020 12:18:44.151818  299029 system_pods.go:89] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:44.151828  299029 system_pods.go:89] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 12:18:44.151836  299029 system_pods.go:89] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 12:18:44.151844  299029 system_pods.go:89] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 12:18:44.151848  299029 system_pods.go:89] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:44.151855  299029 system_pods.go:89] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:44.151860  299029 system_pods.go:89] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:44.151865  299029 system_pods.go:89] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:44.151871  299029 system_pods.go:89] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:44.151875  299029 system_pods.go:89] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:44.151889  299029 system_pods.go:89] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:44.151896  299029 system_pods.go:89] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:44.151913  299029 system_pods.go:89] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 12:18:44.151921  299029 system_pods.go:89] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:44.151935  299029 system_pods.go:89] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:44.151941  299029 system_pods.go:89] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 12:18:44.151945  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending
	I1020 12:18:44.151952  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.151962  299029 system_pods.go:89] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending
	I1020 12:18:44.151979  299029 retry.go:31] will retry after 304.358853ms: missing components: kube-dns
	I1020 12:18:44.228216  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:44.329956  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:44.447462  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:44.460796  299029 system_pods.go:86] 19 kube-system pods found
	I1020 12:18:44.460839  299029 system_pods.go:89] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:44.460848  299029 system_pods.go:89] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 12:18:44.460856  299029 system_pods.go:89] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 12:18:44.460864  299029 system_pods.go:89] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 12:18:44.460869  299029 system_pods.go:89] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:44.460875  299029 system_pods.go:89] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:44.460888  299029 system_pods.go:89] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:44.460898  299029 system_pods.go:89] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:44.460905  299029 system_pods.go:89] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:44.460914  299029 system_pods.go:89] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:44.460919  299029 system_pods.go:89] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:44.460932  299029 system_pods.go:89] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:44.460939  299029 system_pods.go:89] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 12:18:44.460955  299029 system_pods.go:89] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:44.460963  299029 system_pods.go:89] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:44.460969  299029 system_pods.go:89] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 12:18:44.460983  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.460989  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.460995  299029 system_pods.go:89] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:18:44.461012  299029 retry.go:31] will retry after 365.531083ms: missing components: kube-dns
	I1020 12:18:44.533103  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:44.716959  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:44.789779  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:44.848081  299029 system_pods.go:86] 19 kube-system pods found
	I1020 12:18:44.848117  299029 system_pods.go:89] "coredns-66bc5c9577-p2nl7" [92ddaf2d-c924-4ec1-9b5a-9bda00428616] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 12:18:44.848128  299029 system_pods.go:89] "csi-hostpath-attacher-0" [ca69e13c-1695-4b4b-b9b4-ec213f48eae0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1020 12:18:44.848136  299029 system_pods.go:89] "csi-hostpath-resizer-0" [1344870a-c5a5-4406-9c2c-9f3650f9d9f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1020 12:18:44.848143  299029 system_pods.go:89] "csi-hostpathplugin-zhlps" [8f4a4812-d16a-4abf-9249-051f266ee4aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1020 12:18:44.848147  299029 system_pods.go:89] "etcd-addons-399470" [eb1bc17f-88cb-46fc-a919-3f49514b466d] Running
	I1020 12:18:44.848152  299029 system_pods.go:89] "kindnet-s7r92" [c4bb99a6-28f5-484d-a51d-d2841bcf24dd] Running
	I1020 12:18:44.848156  299029 system_pods.go:89] "kube-apiserver-addons-399470" [1cc79c51-e56c-4b09-b9a7-9305edddd975] Running
	I1020 12:18:44.848159  299029 system_pods.go:89] "kube-controller-manager-addons-399470" [81eba53b-6232-4c63-902a-fa65290185da] Running
	I1020 12:18:44.848166  299029 system_pods.go:89] "kube-ingress-dns-minikube" [65d69002-52c5-489f-83d2-20f078130445] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1020 12:18:44.848170  299029 system_pods.go:89] "kube-proxy-vt5tz" [62734ec3-5dac-4d7c-926e-b132c28a5e5e] Running
	I1020 12:18:44.848174  299029 system_pods.go:89] "kube-scheduler-addons-399470" [ba002b90-de8b-413d-9666-beb65b10f89d] Running
	I1020 12:18:44.848180  299029 system_pods.go:89] "metrics-server-85b7d694d7-5rpk5" [8db8f8c6-d940-4e04-80c1-f44e9e4a7840] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1020 12:18:44.848187  299029 system_pods.go:89] "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1020 12:18:44.848193  299029 system_pods.go:89] "registry-6b586f9694-lvkpj" [4f75f2c1-0c8c-440d-833d-ec4585ebc94b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1020 12:18:44.848198  299029 system_pods.go:89] "registry-creds-764b6fb674-n7sjp" [0fec4edc-d24b-4dc6-889b-5f70e34b4061] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1020 12:18:44.848204  299029 system_pods.go:89] "registry-proxy-btjgg" [5a51d8bb-5258-4aa9-bd13-24321b7b2fcc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1020 12:18:44.848210  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9l4l2" [464eabd0-2884-4fbd-9655-f7fed1c15625] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.848218  299029 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gl2q6" [10c0a3a0-eddc-44a0-8cd4-cd9c060bc32a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1020 12:18:44.848225  299029 system_pods.go:89] "storage-provisioner" [10230d45-a804-47ae-a252-46cf8ab61dc4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 12:18:44.848232  299029 system_pods.go:126] duration metric: took 994.79138ms to wait for k8s-apps to be running ...
	I1020 12:18:44.848240  299029 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:18:44.848297  299029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:18:44.876669  299029 system_svc.go:56] duration metric: took 28.403778ms WaitForService to wait for kubelet
	I1020 12:18:44.876705  299029 kubeadm.go:586] duration metric: took 43.127643779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:18:44.876728  299029 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:18:44.881702  299029 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 12:18:44.881734  299029 node_conditions.go:123] node cpu capacity is 2
	I1020 12:18:44.881749  299029 node_conditions.go:105] duration metric: took 5.014881ms to run NodePressure ...
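The NodePressure step reads the capacity figures above (203034800Ki of ephemeral storage, 2 CPUs) straight from the Node object's status. A sketch of fetching them with client-go; the pressure-condition loop at the end is an assumption about what a health verification would look at, not a copy of minikube's node_conditions.go:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-399470", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Capacity figures like "ephemeral capacity is 203034800Ki" and
		// "cpu capacity is 2" in the log come from Node.Status.Capacity.
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
		// Assumption: a pressure check inspects the node conditions; on a
		// healthy node MemoryPressure and DiskPressure report "False".
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("%s=%s\n", c.Type, c.Status)
			}
		}
	}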
	I1020 12:18:44.881762  299029 start.go:241] waiting for startup goroutines ...
	I1020 12:18:44.947145  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:45.048908  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:45.218794  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:45.291163  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:45.447373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:45.533120  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:45.718977  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:45.788734  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:45.947289  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:46.033223  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:46.218725  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:46.289678  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:46.447772  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:46.533450  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:46.717621  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:46.789755  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:46.947407  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:47.033612  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:47.217730  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:47.289367  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:47.446962  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:47.532549  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:47.718094  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:47.789494  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:47.947232  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:48.033635  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:48.218825  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:48.289231  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:48.447339  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:48.534806  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:48.718117  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:48.789383  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:48.951290  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:49.037049  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:49.217914  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:49.289505  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:49.447805  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:49.533389  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:49.717643  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:49.789957  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:49.947572  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:50.034383  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:50.217747  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:50.289056  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:50.446945  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:50.534234  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:50.717588  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:50.789872  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:50.947494  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:51.034194  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:51.217396  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:51.290173  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:51.446972  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:51.533203  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:51.717452  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:51.789402  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:51.946663  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:52.032884  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:52.218303  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:52.289055  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:52.446354  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:52.491642  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:18:52.533527  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:52.719032  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:52.788904  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:52.946454  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:53.033880  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:53.218568  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:53.289273  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:53.446237  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:53.533581  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:53.576714  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.085034741s)
	W1020 12:18:53.576791  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:18:53.576825  299029 retry.go:31] will retry after 12.525708001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
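
The failure above is purely client-side: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest carries neither apiVersion nor kind, and minikube's addon applier responds by re-running the same command after a growing pause (12.5s here, 27.9s on the next attempt). A minimal sketch of that apply-with-backoff loop in Go, assuming a plain kubectl on PATH rather than minikube's ssh_runner; the function name, attempt count, and starting delay are illustrative, not minikube's actual retry.go API:

	package addons

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply --force -f <file> ...` until it
	// succeeds or the attempts are exhausted, roughly doubling the pause
	// between tries the way the retry.go lines in this log do.
	func applyWithRetry(kubeconfig string, files []string, attempts int) error {
		args := []string{"apply", "--force"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		backoff := 12 * time.Second // illustrative starting delay
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", args...)
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed, will retry: %w\n%s", err, out)
			time.Sleep(backoff)
			backoff *= 2
		}
		return lastErr
	}

Note that retrying cannot help here: the manifest on disk is what fails validation, so every attempt below hits the identical error.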
	I1020 12:18:53.718037  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:53.789279  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:53.946899  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:54.033528  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:54.217683  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:54.289595  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:54.447042  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:54.534061  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:54.718183  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:54.789212  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:54.946638  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:55.034956  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:55.217997  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:55.289465  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:55.447473  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:55.533712  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:55.718077  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:55.790119  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:55.946414  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:56.033926  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:56.218333  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:56.289016  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:56.446202  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:56.535116  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:56.718506  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:56.790080  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:56.951720  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:57.034134  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:57.218196  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:57.293166  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:57.449224  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:57.533838  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:57.718129  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:57.790228  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:57.946554  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:58.033960  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:58.217481  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:58.292906  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:58.447629  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:58.533505  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:58.717648  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:58.789649  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:58.947069  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:59.033142  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:59.217592  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:59.289607  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:59.448207  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:18:59.533618  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:18:59.718003  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:18:59.791668  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:18:59.950591  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:00.036074  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:00.226448  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:00.293815  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:00.450926  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:00.533741  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:00.718913  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:00.790366  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:00.950534  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:01.035954  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:01.218345  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:01.289910  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:01.448405  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:01.534803  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:01.719602  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:01.794330  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:01.951865  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:02.033383  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:02.217234  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:02.289819  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:02.447916  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:02.533666  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:02.718129  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:02.790012  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:02.946158  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:03.033947  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:03.217409  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:03.289624  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:03.453633  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:03.557188  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:03.717072  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:03.790073  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:03.947133  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:04.033333  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:04.217055  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:04.289176  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:04.447424  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:04.547608  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:04.717778  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:04.790001  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:04.947628  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:05.048488  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:05.217169  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:05.289191  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:05.446730  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:05.533967  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:05.718257  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:05.819190  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:05.946594  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:06.033266  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:06.103351  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:19:06.226482  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:06.326925  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:06.446846  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:06.534165  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:06.718785  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:06.789619  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:06.949522  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:07.049116  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:07.115274  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.011881944s)
	W1020 12:19:07.115328  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:19:07.115400  299029 retry.go:31] will retry after 27.948632049s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1020 12:19:07.217022  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:07.289257  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:07.446726  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:07.533265  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:07.717652  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:07.789807  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:07.947167  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:08.034066  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:08.217819  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:08.293028  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:08.447447  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:08.534054  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:08.718568  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:08.789460  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:08.946829  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:09.032913  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:09.221176  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:09.289145  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:09.446393  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:09.533611  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:09.717578  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:09.789559  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:09.946995  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:10.033556  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:10.218012  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:10.289380  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:10.447404  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:10.533944  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:10.718420  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:10.789154  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:10.947100  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:11.032795  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:11.217710  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:11.289976  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:11.445924  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:11.537645  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:11.718104  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:11.789224  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:11.947140  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:12.058666  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:12.218373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:12.289598  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:12.446929  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:12.534266  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:12.717482  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:12.790158  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:12.946243  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:13.035028  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:13.217715  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:13.289604  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:13.448049  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:13.533243  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:13.720849  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:13.790360  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:13.947429  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:14.049256  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:14.217895  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:14.288903  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:14.447993  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:14.532800  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:14.718006  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:14.790133  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:14.952578  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:15.039175  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:15.217964  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:15.289276  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:15.447455  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:15.534060  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:15.718518  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:15.789613  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:15.947259  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:16.033601  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:16.217808  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:16.318926  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:16.447032  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:16.532901  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:16.718430  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:16.789862  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:16.947652  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:17.034709  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:17.218277  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:17.289488  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:17.447289  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:17.533356  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:17.717373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:17.789731  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:17.947119  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:18.033467  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:18.218185  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:18.289553  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:18.447260  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:18.533895  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:18.718719  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:18.789283  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:18.947286  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:19.033629  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:19.217978  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:19.289198  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:19.446715  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:19.533180  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:19.717363  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:19.790235  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:19.947014  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:20.033667  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:20.218272  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:20.295839  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:20.446156  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:20.533136  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:20.717667  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:20.790281  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:20.946819  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:21.033470  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:21.217862  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:21.290077  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:21.446422  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:21.533557  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:21.717799  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:21.789668  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:21.947125  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:22.033879  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:22.217580  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:22.289764  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:22.448087  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:22.533232  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:22.716978  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:22.789488  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:22.947240  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:23.033745  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:23.218614  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:23.290080  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:23.447753  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:23.533730  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:23.718095  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:23.789230  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:23.946932  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:24.035949  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:24.218274  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:24.289277  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:24.447067  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:24.533349  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:24.717256  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:24.790852  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:24.948902  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:25.033212  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:25.217198  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:25.289380  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:25.446697  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:25.532667  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:25.717892  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:25.788811  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:25.946920  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:26.033045  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:26.217839  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:26.288922  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:26.445951  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:26.532920  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:26.718373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:26.789511  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:26.947990  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:27.033319  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:27.217852  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:27.289079  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:27.446526  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:27.533586  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:27.717689  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:27.789844  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:27.946919  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:28.033100  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:28.218186  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:28.289767  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:28.447148  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:28.533535  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:28.718749  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:28.788995  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:28.947013  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:29.047377  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:29.217248  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:29.289494  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:29.447333  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:29.533287  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:29.718018  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:29.818189  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:29.946675  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:30.039328  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:30.217478  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:30.290373  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:30.446940  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:30.533115  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:30.717175  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:30.789159  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:30.947131  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:31.033580  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:31.217766  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:31.290882  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:31.447000  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:31.533737  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:31.718028  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:31.789697  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:31.947195  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:32.033600  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:32.217417  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:32.294504  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:32.447573  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:32.534254  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:32.718390  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:32.789388  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1020 12:19:32.949646  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:33.048099  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:33.218817  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:33.289050  299029 kapi.go:107] duration metric: took 1m26.003268603s to wait for kubernetes.io/minikube-addons=registry ...
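
The wait that just completed for kubernetes.io/minikube-addons=registry, and the ones still running, are all the same kapi.go loop: list pods by label selector a few times a second and log the phase until every match has left Pending. A minimal client-go sketch of that wait (the function name and 500ms interval are assumptions, not minikube's exact code):

	package addons

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls pods matching selector in ns until none of them is
	// Pending, mirroring the "waiting for pod ... current state: Pending"
	// lines in this log. Transient list errors and empty results just mean
	// "poll again" rather than failure.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // nothing to judge yet; keep waiting
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodPending {
						return false, nil
					}
				}
				return true, nil
			})
	}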
	I1020 12:19:33.446440  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:33.533462  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:33.717281  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:33.947411  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:34.041922  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:34.217585  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:34.447149  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:34.533260  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:34.717785  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:34.948588  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:35.032738  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:35.065123  299029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1020 12:19:35.218044  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:35.446531  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:35.533813  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:35.718523  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:35.947254  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:36.033733  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:36.219851  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:36.366688  299029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.301476599s)
	W1020 12:19:36.366721  299029 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1020 12:19:36.366798  299029 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
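
After the third failed apply the addon gives up and surfaces the error via out.go. The root cause never changed across the three attempts: ig-crd.yaml is missing its apiVersion and kind fields, which kubectl checks before anything reaches the API server (the suggested --validate=false would not rescue a manifest that cannot even be mapped to a resource type). A hedged Go sketch of that check using apimachinery's TypeMeta, reproducing the exact complaint seen above; it assumes a single-document manifest, so a real multi-document file would first be split on "---":

	package addons

	import (
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	// checkTypeMeta mimics the client-side validation that produced the
	// "[apiVersion not set, kind not set]" error in this log: every
	// top-level object in a manifest must declare both fields.
	func checkTypeMeta(doc []byte) error {
		var tm metav1.TypeMeta
		if err := yaml.Unmarshal(doc, &tm); err != nil {
			return err
		}
		var missing []string
		if tm.APIVersion == "" {
			missing = append(missing, "apiVersion not set")
		}
		if tm.Kind == "" {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			return fmt.Errorf("error validating data: [%s]", strings.Join(missing, ", "))
		}
		return nil
	}

This is consistent with the later TestAddons/parallel/InspektorGadget failure in the summary table: the gadget DaemonSet itself applied cleanly ("daemonset.apps/gadget configured"), but the CRD manifest never did.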
	I1020 12:19:36.447410  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:36.533419  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:36.717219  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:36.947721  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:37.033370  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:37.217656  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:37.447493  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:37.534048  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:37.717249  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:37.946825  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:38.033315  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:38.218799  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:38.446850  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:38.535854  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:38.718827  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:38.947780  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:39.034167  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:39.222988  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:39.451212  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:39.535950  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:39.723821  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:39.946308  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:40.033860  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:40.217227  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:40.447311  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:40.533197  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:40.717320  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:40.947280  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:41.033988  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:41.222842  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:41.447933  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:41.533426  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:41.719650  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:41.947322  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:42.034143  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:42.218071  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1020 12:19:42.466464  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:42.535812  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:42.718631  299029 kapi.go:107] duration metric: took 1m31.00447456s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1020 12:19:42.721968  299029 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-399470 cluster.
	I1020 12:19:42.724945  299029 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1020 12:19:42.728012  299029 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1020 12:19:42.947884  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:43.033201  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:43.446772  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:43.533316  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:43.946515  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:44.033809  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:44.446040  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:44.533398  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:44.947653  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:45.043366  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:45.447590  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:45.534263  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:45.947309  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:46.033604  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:46.447481  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:46.533921  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:46.947371  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:47.039866  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:47.446978  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:47.533224  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:47.950149  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:48.033652  299029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1020 12:19:48.448020  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:48.533128  299029 kapi.go:107] duration metric: took 1m40.503421446s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1020 12:19:48.946589  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:49.446567  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:49.964514  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:50.447631  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:50.947458  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:51.447368  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:51.947283  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:52.447315  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:52.947225  299029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1020 12:19:53.447661  299029 kapi.go:107] duration metric: took 1m45.004624048s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1020 12:19:53.450591  299029 out.go:179] * Enabled addons: registry-creds, ingress-dns, nvidia-device-plugin, cloud-spanner, yakd, storage-provisioner-rancher, storage-provisioner, amd-gpu-device-plugin, default-storageclass, metrics-server, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1020 12:19:53.453477  299029 addons.go:514] duration metric: took 1m51.704007706s for enable addons: enabled=[registry-creds ingress-dns nvidia-device-plugin cloud-spanner yakd storage-provisioner-rancher storage-provisioner amd-gpu-device-plugin default-storageclass metrics-server volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1020 12:19:53.453541  299029 start.go:246] waiting for cluster config update ...
	I1020 12:19:53.453566  299029 start.go:255] writing updated cluster config ...
	I1020 12:19:53.453907  299029 ssh_runner.go:195] Run: rm -f paused
	I1020 12:19:53.457563  299029 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:19:53.461089  299029 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p2nl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.467032  299029 pod_ready.go:94] pod "coredns-66bc5c9577-p2nl7" is "Ready"
	I1020 12:19:53.467066  299029 pod_ready.go:86] duration metric: took 5.947103ms for pod "coredns-66bc5c9577-p2nl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.469510  299029 pod_ready.go:83] waiting for pod "etcd-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.474371  299029 pod_ready.go:94] pod "etcd-addons-399470" is "Ready"
	I1020 12:19:53.474462  299029 pod_ready.go:86] duration metric: took 4.923393ms for pod "etcd-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.477150  299029 pod_ready.go:83] waiting for pod "kube-apiserver-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.482631  299029 pod_ready.go:94] pod "kube-apiserver-addons-399470" is "Ready"
	I1020 12:19:53.482665  299029 pod_ready.go:86] duration metric: took 5.488931ms for pod "kube-apiserver-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.485389  299029 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:53.861418  299029 pod_ready.go:94] pod "kube-controller-manager-addons-399470" is "Ready"
	I1020 12:19:53.861449  299029 pod_ready.go:86] duration metric: took 376.032787ms for pod "kube-controller-manager-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:54.062027  299029 pod_ready.go:83] waiting for pod "kube-proxy-vt5tz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:54.460868  299029 pod_ready.go:94] pod "kube-proxy-vt5tz" is "Ready"
	I1020 12:19:54.460897  299029 pod_ready.go:86] duration metric: took 398.844013ms for pod "kube-proxy-vt5tz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:54.661887  299029 pod_ready.go:83] waiting for pod "kube-scheduler-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:55.061275  299029 pod_ready.go:94] pod "kube-scheduler-addons-399470" is "Ready"
	I1020 12:19:55.061307  299029 pod_ready.go:86] duration metric: took 399.392845ms for pod "kube-scheduler-addons-399470" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:19:55.061320  299029 pod_ready.go:40] duration metric: took 1.603724571s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:19:55.121610  299029 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 12:19:55.124983  299029 out.go:179] * Done! kubectl is now configured to use "addons-399470" cluster and "default" namespace by default
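
The kapi.go lines above are minikube's readiness polling: each labelled addon pod set is re-checked roughly every 500 ms (see the timestamps) until it leaves Pending or the per-addon timeout expires. A roughly equivalent manual check with kubectl, using a label selector taken from the log (the timeout value here is arbitrary):

    kubectl -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      --for=condition=Ready --timeout=6m
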
	
	
	==> CRI-O <==
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.564255854Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.569012575Z" level=info msg="Removing container: d101a20d45aee17c984a2731a6a2510c03bd01e4d0c2d68cf9d1e460f27e12b6" id=237ba96b-4a82-402d-81ae-7534fef1e611 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.572001774Z" level=info msg="Error loading conmon cgroup of container d101a20d45aee17c984a2731a6a2510c03bd01e4d0c2d68cf9d1e460f27e12b6: cgroup deleted" id=237ba96b-4a82-402d-81ae-7534fef1e611 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.594780351Z" level=info msg="Removed container d101a20d45aee17c984a2731a6a2510c03bd01e4d0c2d68cf9d1e460f27e12b6: gcp-auth/gcp-auth-certs-patch-hkpfg/patch" id=237ba96b-4a82-402d-81ae-7534fef1e611 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.599212218Z" level=info msg="Removing container: dc58967e7604bb9c907040bd795ba2de3e5df0f40c50d8388de65ce52020146d" id=ec6e62a3-a777-4659-99f3-a6d120a922b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.60244078Z" level=info msg="Error loading conmon cgroup of container dc58967e7604bb9c907040bd795ba2de3e5df0f40c50d8388de65ce52020146d: cgroup deleted" id=ec6e62a3-a777-4659-99f3-a6d120a922b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.614609475Z" level=info msg="Removed container dc58967e7604bb9c907040bd795ba2de3e5df0f40c50d8388de65ce52020146d: gcp-auth/gcp-auth-certs-create-zmhzz/create" id=ec6e62a3-a777-4659-99f3-a6d120a922b5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.618861631Z" level=info msg="Stopping pod sandbox: edbd6bf24e29594c74640856f67f7d65b1bb4fab9518f4df995b13497afbf9de" id=69c9a22a-69d6-4509-8716-d1467e428ef6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.618919601Z" level=info msg="Stopped pod sandbox (already stopped): edbd6bf24e29594c74640856f67f7d65b1bb4fab9518f4df995b13497afbf9de" id=69c9a22a-69d6-4509-8716-d1467e428ef6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.619369388Z" level=info msg="Removing pod sandbox: edbd6bf24e29594c74640856f67f7d65b1bb4fab9518f4df995b13497afbf9de" id=6f9bb0e9-e266-4367-a2cb-01a3a1acd83f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.631015705Z" level=info msg="Removed pod sandbox: edbd6bf24e29594c74640856f67f7d65b1bb4fab9518f4df995b13497afbf9de" id=6f9bb0e9-e266-4367-a2cb-01a3a1acd83f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.631803186Z" level=info msg="Stopping pod sandbox: 7569a3bdef5014cbc03b063d10e08b938d1f75293180e66b81fc22ad24ee0252" id=ce56edf3-3f06-4f55-b0a5-cea7f553c83a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.631849389Z" level=info msg="Stopped pod sandbox (already stopped): 7569a3bdef5014cbc03b063d10e08b938d1f75293180e66b81fc22ad24ee0252" id=ce56edf3-3f06-4f55-b0a5-cea7f553c83a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.632140609Z" level=info msg="Removing pod sandbox: 7569a3bdef5014cbc03b063d10e08b938d1f75293180e66b81fc22ad24ee0252" id=34ffd497-2b3b-4af3-9744-20dd7d220b71 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:19:56 addons-399470 crio[830]: time="2025-10-20T12:19:56.642906268Z" level=info msg="Removed pod sandbox: 7569a3bdef5014cbc03b063d10e08b938d1f75293180e66b81fc22ad24ee0252" id=34ffd497-2b3b-4af3-9744-20dd7d220b71 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.506248044Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=26f400c8-a0e1-4bfd-9f96-c5bde0f921c0 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.506884869Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b2ef170-636a-46af-b7ea-f05f0c88e4ee name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.509176795Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c79c3fe9-8a5d-49bf-a5cd-3f13be88daff name=/runtime.v1.ImageService/ImageStatus
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.518731263Z" level=info msg="Creating container: default/busybox/busybox" id=23e4900a-75ad-499d-a3f2-7f41314ef047 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.519076088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.527106975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.527644606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.544933506Z" level=info msg="Created container 90d50091d577dcc2f202944dde4634cdc4a8208967981b2aec03f4988b13d8e8: default/busybox/busybox" id=23e4900a-75ad-499d-a3f2-7f41314ef047 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.546610713Z" level=info msg="Starting container: 90d50091d577dcc2f202944dde4634cdc4a8208967981b2aec03f4988b13d8e8" id=cda5226a-50a6-4665-8622-9d948aa6e26a name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 12:19:58 addons-399470 crio[830]: time="2025-10-20T12:19:58.548337061Z" level=info msg="Started container" PID=5029 containerID=90d50091d577dcc2f202944dde4634cdc4a8208967981b2aec03f4988b13d8e8 description=default/busybox/busybox id=cda5226a-50a6-4665-8622-9d948aa6e26a name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd85c59d8eed4f104e811bf5f3a94408367b24c959bf1cd680c18d91b4f41886
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	90d50091d577d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          10 seconds ago       Running             busybox                                  0                   cd85c59d8eed4       busybox                                     default
	bf26f1feb82cc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	c1088bae9a808       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          17 seconds ago       Running             csi-provisioner                          0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	2ecff662c7508       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	3d5b3fe12ffc9       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	040ca55c52db6       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             20 seconds ago       Running             controller                               0                   0dddc6caee92f       ingress-nginx-controller-675c5ddd98-gljcq   ingress-nginx
	cc9b2f965f599       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 27 seconds ago       Running             gcp-auth                                 0                   abab1c0318c08       gcp-auth-78565c9fb4-k8ch2                   gcp-auth
	f3d67034b0eae       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            30 seconds ago       Running             gadget                                   0                   3d3926abfce8d       gadget-qgrgn                                gadget
	fe8a3095a471f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                34 seconds ago       Running             node-driver-registrar                    0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	a624518e6294a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              35 seconds ago       Running             registry-proxy                           0                   5b1914a1b577c       registry-proxy-btjgg                        kube-system
	079e485c9fbfe       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     38 seconds ago       Running             nvidia-device-plugin-ctr                 0                   3aec213078be0       nvidia-device-plugin-daemonset-q9xwr        kube-system
	cd2ca85339d1a       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             38 seconds ago       Exited              patch                                    2                   24075e10b491d       ingress-nginx-admission-patch-4xdfj         ingress-nginx
	03790aafd94f7       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           52 seconds ago       Running             registry                                 0                   87612f77f7eee       registry-6b586f9694-lvkpj                   kube-system
	97d5ef2a92aa8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   54 seconds ago       Exited              create                                   0                   eed72b2eb0a7d       ingress-nginx-admission-create-sf6cv        ingress-nginx
	5e2819fa3e373       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      55 seconds ago       Running             volume-snapshot-controller               0                   211267160f5e9       snapshot-controller-7d9fbc56b8-gl2q6        kube-system
	8df71fb091362       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      55 seconds ago       Running             volume-snapshot-controller               0                   0a75d06d73a7d       snapshot-controller-7d9fbc56b8-9l4l2        kube-system
	b021b5e25fa3c       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              57 seconds ago       Running             yakd                                     0                   de41a8a499d54       yakd-dashboard-5ff678cb9-xk78f              yakd-dashboard
	d0148fcb0cd20       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   f0fbc4b7fafd7       csi-hostpath-resizer-0                      kube-system
	f56add90136c7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   3ec422533f057       csi-hostpathplugin-zhlps                    kube-system
	fee29d2ac0336       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   01b209fade911       local-path-provisioner-648f6765c9-8dnp7     local-path-storage
	65f042711da86       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   8c96751ff53bc       csi-hostpath-attacher-0                     kube-system
	f8f46e656fa1f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   825108adc2769       metrics-server-85b7d694d7-5rpk5             kube-system
	1576d096039be       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   5720a3f6f402f       kube-ingress-dns-minikube                   kube-system
	6e37e09210384       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   3fbbd3cf6d311       cloud-spanner-emulator-86bd5cbb97-lp67l     default
	1331a3ab9aa84       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   ba7b97887c0bb       storage-provisioner                         kube-system
	9d231cda83b6a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   bef13ab8dc885       coredns-66bc5c9577-p2nl7                    kube-system
	1e45f17a364d1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   f4e19e684f48d       kindnet-s7r92                               kube-system
	559bae86282f4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   f323733abfd62       kube-proxy-vt5tz                            kube-system
	20bd22af6ef5b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   82fdfbc19dee1       kube-controller-manager-addons-399470       kube-system
	70cb33ebef465       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   c52909663e5aa       etcd-addons-399470                          kube-system
	cb73c63d85142       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   a929f24cbe9b0       kube-scheduler-addons-399470                kube-system
	e7b4d0b02797f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   2d21c6ef8dd4d       kube-apiserver-addons-399470                kube-system
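
The table above is CRI-O's view of container state. With the crio runtime it can be reproduced on the node via crictl; a sketch, assuming crictl is on the node's PATH as in standard minikube images:

    minikube -p addons-399470 ssh -- sudo crictl ps -a
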
	
	
	==> coredns [9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842] <==
	[INFO] 10.244.0.17:55226 - 45080 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000073651s
	[INFO] 10.244.0.17:55226 - 55013 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004053656s
	[INFO] 10.244.0.17:55226 - 63680 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004640929s
	[INFO] 10.244.0.17:55226 - 56905 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000132539s
	[INFO] 10.244.0.17:55226 - 6071 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000088698s
	[INFO] 10.244.0.17:58418 - 24542 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162423s
	[INFO] 10.244.0.17:58418 - 24065 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000187785s
	[INFO] 10.244.0.17:52511 - 3388 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110139s
	[INFO] 10.244.0.17:52511 - 3593 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000112182s
	[INFO] 10.244.0.17:36782 - 19695 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084743s
	[INFO] 10.244.0.17:36782 - 19201 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078648s
	[INFO] 10.244.0.17:41207 - 41020 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004344597s
	[INFO] 10.244.0.17:41207 - 40766 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004318085s
	[INFO] 10.244.0.17:33849 - 19502 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132629s
	[INFO] 10.244.0.17:33849 - 19322 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000208396s
	[INFO] 10.244.0.21:56902 - 29850 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174861s
	[INFO] 10.244.0.21:38402 - 1059 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177192s
	[INFO] 10.244.0.21:54359 - 56094 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002262748s
	[INFO] 10.244.0.21:55804 - 64583 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001905255s
	[INFO] 10.244.0.21:54141 - 58 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000179317s
	[INFO] 10.244.0.21:56647 - 30890 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000361449s
	[INFO] 10.244.0.21:41928 - 54889 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003086373s
	[INFO] 10.244.0.21:51032 - 12661 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002142148s
	[INFO] 10.244.0.21:35780 - 33125 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002302922s
	[INFO] 10.244.0.21:42170 - 54059 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001874215s
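
The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion, not failures: with the cluster default of ndots:5, a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the bare name resolves. A trailing dot marks the name fully qualified and skips the expansion; a quick check from the existing busybox pod (nslookup ships in that image):

    kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local.
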
	
	
	==> describe nodes <==
	Name:               addons-399470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-399470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=addons-399470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_17_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-399470
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-399470"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:17:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-399470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:19:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:19:38 +0000   Mon, 20 Oct 2025 12:17:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:19:38 +0000   Mon, 20 Oct 2025 12:17:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:19:38 +0000   Mon, 20 Oct 2025 12:17:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:19:38 +0000   Mon, 20 Oct 2025 12:18:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-399470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                8a403dc7-d68b-4de1-8372-5565f302155c
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-lp67l      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gadget                      gadget-qgrgn                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  gcp-auth                    gcp-auth-78565c9fb4-k8ch2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-gljcq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m1s
	  kube-system                 coredns-66bc5c9577-p2nl7                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 csi-hostpathplugin-zhlps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 etcd-addons-399470                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m12s
	  kube-system                 kindnet-s7r92                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m7s
	  kube-system                 kube-apiserver-addons-399470                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-addons-399470        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-vt5tz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-scheduler-addons-399470                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 metrics-server-85b7d694d7-5rpk5              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m2s
	  kube-system                 nvidia-device-plugin-daemonset-q9xwr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 registry-6b586f9694-lvkpj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 registry-creds-764b6fb674-n7sjp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 registry-proxy-btjgg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 snapshot-controller-7d9fbc56b8-9l4l2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-gl2q6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  local-path-storage          local-path-provisioner-648f6765c9-8dnp7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xk78f               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m5s   kube-proxy       
	  Normal   Starting                 2m12s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m12s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m12s  kubelet          Node addons-399470 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m12s  kubelet          Node addons-399470 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m12s  kubelet          Node addons-399470 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m8s   node-controller  Node addons-399470 event: Registered Node addons-399470 in Controller
	  Normal   NodeReady                85s    kubelet          Node addons-399470 status is now: NodeReady
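
Worth noting in the allocation table above: CPU requests already total 1050m (52%) on a 2-CPU node, so additional addon pods that declare CPU requests can fail to schedule. This section corresponds to standard kubectl output and can be regenerated with:

    kubectl describe node addons-399470
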
	
	
	==> dmesg <==
	[Oct20 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016790] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.502629] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033585] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.794361] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.786595] kauditd_printk_skb: 36 callbacks suppressed
	[Oct20 11:29] hrtimer: interrupt took 3085842 ns
	[Oct20 12:16] kauditd_printk_skb: 8 callbacks suppressed
	[Oct20 12:17] overlayfs: idmapped layers are currently not supported
	[  +0.065938] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609] <==
	{"level":"warn","ts":"2025-10-20T12:17:52.694206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.711459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.737567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.776116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.785072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.796249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.820841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.837296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.850025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.864521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.881163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.904413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.918539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.934736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.954279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:52.979543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:53.004534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:53.041244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:17:53.136658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:08.762153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:08.779257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:30.844908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:30.858896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:30.888490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:18:30.902570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45576","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [cc9b2f965f599a7998cd982e5ff841803f0f76d5bf010b3a8798797b82c32bba] <==
	2025/10/20 12:19:41 GCP Auth Webhook started!
	2025/10/20 12:19:55 Ready to marshal response ...
	2025/10/20 12:19:55 Ready to write response ...
	2025/10/20 12:19:56 Ready to marshal response ...
	2025/10/20 12:19:56 Ready to write response ...
	2025/10/20 12:19:56 Ready to marshal response ...
	2025/10/20 12:19:56 Ready to write response ...
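
Per the gcp-auth messages earlier in the minikube log, every new pod gets the GCP credentials mounted unless it opts out with a label carrying the gcp-auth-skip-secret key. A minimal opt-out sketch (pod name and image are illustrative; per the log message only the label key matters):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-creds-example            # illustrative name
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: main
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
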
	
	
	==> kernel <==
	 12:20:08 up  2:02,  0 user,  load average: 2.37, 2.75, 3.10
	Linux addons-399470 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01] <==
	E1020 12:18:33.020026       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 12:18:33.101750       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 12:18:33.101807       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 12:18:33.101865       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1020 12:18:34.202234       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 12:18:34.202408       1 metrics.go:72] Registering metrics
	I1020 12:18:34.202486       1 controller.go:711] "Syncing nftables rules"
	I1020 12:18:43.021448       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:18:43.021487       1 main.go:301] handling current node
	I1020 12:18:53.021172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:18:53.021205       1 main.go:301] handling current node
	I1020 12:19:03.019598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:19:03.019642       1 main.go:301] handling current node
	I1020 12:19:13.019628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:19:13.019658       1 main.go:301] handling current node
	I1020 12:19:23.024509       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:19:23.024540       1 main.go:301] handling current node
	I1020 12:19:33.019156       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:19:33.019204       1 main.go:301] handling current node
	I1020 12:19:43.019793       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:19:43.019870       1 main.go:301] handling current node
	I1020 12:19:53.020470       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:19:53.020501       1 main.go:301] handling current node
	I1020 12:20:03.020472       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:20:03.020613       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c] <==
	E1020 12:19:11.936304       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1020 12:19:11.936691       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.72.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.72.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.72.218:443: connect: connection refused" logger="UnhandledError"
	E1020 12:19:11.937994       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.72.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.72.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.72.218:443: connect: connection refused" logger="UnhandledError"
	W1020 12:19:12.936975       1 handler_proxy.go:99] no RequestInfo found in the context
	W1020 12:19:12.936980       1 handler_proxy.go:99] no RequestInfo found in the context
	E1020 12:19:12.937050       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1020 12:19:12.937063       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1020 12:19:12.937065       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1020 12:19:12.938077       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1020 12:19:16.952809       1 handler_proxy.go:99] no RequestInfo found in the context
	E1020 12:19:16.952886       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1020 12:19:16.953013       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.72.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.72.218:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1020 12:19:17.001474       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1020 12:20:06.439594       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37652: use of closed network connection
	E1020 12:20:06.687219       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37690: use of closed network connection
	E1020 12:20:06.821641       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37696: use of closed network connection
	
	
	==> kube-controller-manager [20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b] <==
	I1020 12:18:00.870681       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 12:18:00.870696       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 12:18:00.872053       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 12:18:00.873261       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:18:00.872180       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:18:00.872250       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:18:00.873347       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:18:00.872133       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:18:00.880606       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1020 12:18:00.880678       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1020 12:18:00.880698       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1020 12:18:00.880714       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1020 12:18:00.880720       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1020 12:18:00.890750       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-399470" podCIDRs=["10.244.0.0/24"]
	E1020 12:18:06.618323       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1020 12:18:30.838094       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1020 12:18:30.838247       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1020 12:18:30.838285       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1020 12:18:30.869232       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1020 12:18:30.873442       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1020 12:18:30.938435       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:18:30.973790       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 12:18:45.808706       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1020 12:19:00.944769       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1020 12:19:00.987240       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33] <==
	I1020 12:18:02.750469       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:18:02.834256       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:18:02.937576       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:18:02.937616       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1020 12:18:02.937690       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:18:03.054576       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:18:03.054628       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:18:03.070125       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:18:03.070440       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:18:03.070456       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:18:03.073844       1 config.go:200] "Starting service config controller"
	I1020 12:18:03.074892       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:18:03.074922       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:18:03.074927       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:18:03.074950       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:18:03.074954       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:18:03.075712       1 config.go:309] "Starting node config controller"
	I1020 12:18:03.075775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:18:03.075806       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:18:03.176260       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:18:03.176305       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:18:03.176318       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c] <==
	E1020 12:17:53.972494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:17:53.972727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:17:53.972841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:17:53.973007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:17:53.973105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:17:53.973238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:17:53.973323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:17:53.973721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:17:53.973832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:17:53.976547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:17:53.976707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:17:53.976865       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:17:53.976963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:17:53.977326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:17:53.977390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:17:54.781842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:17:54.872416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 12:17:54.874852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:17:54.918648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:17:54.924398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1020 12:17:54.961714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 12:17:55.014892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:17:55.043702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:17:55.121521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1020 12:17:57.440410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:19:31 addons-399470 kubelet[1290]: I1020 12:19:31.355737    1290 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ba6bd9-b322-4b84-8fc8-2e28557631aa-kube-api-access-psnwt" (OuterVolumeSpecName: "kube-api-access-psnwt") pod "96ba6bd9-b322-4b84-8fc8-2e28557631aa" (UID: "96ba6bd9-b322-4b84-8fc8-2e28557631aa"). InnerVolumeSpecName "kube-api-access-psnwt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 20 12:19:31 addons-399470 kubelet[1290]: I1020 12:19:31.453949    1290 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-psnwt\" (UniqueName: \"kubernetes.io/projected/96ba6bd9-b322-4b84-8fc8-2e28557631aa-kube-api-access-psnwt\") on node \"addons-399470\" DevicePath \"\""
	Oct 20 12:19:32 addons-399470 kubelet[1290]: I1020 12:19:32.142584    1290 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24075e10b491d295dcc7e835d777146e70eae4bfd3a1a5a73163c1b79c2e8148"
	Oct 20 12:19:33 addons-399470 kubelet[1290]: I1020 12:19:33.148893    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-btjgg" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:19:34 addons-399470 kubelet[1290]: I1020 12:19:34.055384    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-btjgg" podStartSLOduration=3.018823288 podStartE2EDuration="51.055360196s" podCreationTimestamp="2025-10-20 12:18:43 +0000 UTC" firstStartedPulling="2025-10-20 12:18:44.873357734 +0000 UTC m=+48.485845897" lastFinishedPulling="2025-10-20 12:19:32.909894642 +0000 UTC m=+96.522382805" observedRunningTime="2025-10-20 12:19:33.164454454 +0000 UTC m=+96.776942616" watchObservedRunningTime="2025-10-20 12:19:34.055360196 +0000 UTC m=+97.667848441"
	Oct 20 12:19:34 addons-399470 kubelet[1290]: I1020 12:19:34.154263    1290 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-btjgg" secret="" err="secret \"gcp-auth\" not found"
	Oct 20 12:19:34 addons-399470 kubelet[1290]: I1020 12:19:34.503779    1290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a984e884-fbe3-4f5e-b27f-c510123ad06b" path="/var/lib/kubelet/pods/a984e884-fbe3-4f5e-b27f-c510123ad06b/volumes"
	Oct 20 12:19:40 addons-399470 kubelet[1290]: I1020 12:19:40.516632    1290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b893c821-f58d-4771-9bfe-3f35399018a8" path="/var/lib/kubelet/pods/b893c821-f58d-4771-9bfe-3f35399018a8/volumes"
	Oct 20 12:19:42 addons-399470 kubelet[1290]: I1020 12:19:42.255701    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-qgrgn" podStartSLOduration=69.177105513 podStartE2EDuration="1m35.255682084s" podCreationTimestamp="2025-10-20 12:18:07 +0000 UTC" firstStartedPulling="2025-10-20 12:19:12.072015501 +0000 UTC m=+75.684503664" lastFinishedPulling="2025-10-20 12:19:38.150591974 +0000 UTC m=+101.763080235" observedRunningTime="2025-10-20 12:19:39.228250027 +0000 UTC m=+102.840738198" watchObservedRunningTime="2025-10-20 12:19:42.255682084 +0000 UTC m=+105.868170247"
	Oct 20 12:19:42 addons-399470 kubelet[1290]: I1020 12:19:42.282311    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-k8ch2" podStartSLOduration=66.187533197 podStartE2EDuration="1m31.282278875s" podCreationTimestamp="2025-10-20 12:18:11 +0000 UTC" firstStartedPulling="2025-10-20 12:19:16.058279632 +0000 UTC m=+79.670767795" lastFinishedPulling="2025-10-20 12:19:41.15302522 +0000 UTC m=+104.765513473" observedRunningTime="2025-10-20 12:19:42.281631236 +0000 UTC m=+105.894119415" watchObservedRunningTime="2025-10-20 12:19:42.282278875 +0000 UTC m=+105.894767038"
	Oct 20 12:19:48 addons-399470 kubelet[1290]: E1020 12:19:48.330904    1290 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 20 12:19:48 addons-399470 kubelet[1290]: E1020 12:19:48.332521    1290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0fec4edc-d24b-4dc6-889b-5f70e34b4061-gcr-creds podName:0fec4edc-d24b-4dc6-889b-5f70e34b4061 nodeName:}" failed. No retries permitted until 2025-10-20 12:20:52.331820911 +0000 UTC m=+175.944309263 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0fec4edc-d24b-4dc6-889b-5f70e34b4061-gcr-creds") pod "registry-creds-764b6fb674-n7sjp" (UID: "0fec4edc-d24b-4dc6-889b-5f70e34b4061") : secret "registry-creds-gcr" not found
	Oct 20 12:19:49 addons-399470 kubelet[1290]: I1020 12:19:49.739810    1290 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 20 12:19:49 addons-399470 kubelet[1290]: I1020 12:19:49.739863    1290 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 20 12:19:53 addons-399470 kubelet[1290]: I1020 12:19:53.322009    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-zhlps" podStartSLOduration=1.4900004660000001 podStartE2EDuration="1m10.321991342s" podCreationTimestamp="2025-10-20 12:18:43 +0000 UTC" firstStartedPulling="2025-10-20 12:18:44.250664298 +0000 UTC m=+47.863152461" lastFinishedPulling="2025-10-20 12:19:53.082655174 +0000 UTC m=+116.695143337" observedRunningTime="2025-10-20 12:19:53.320432323 +0000 UTC m=+116.932920494" watchObservedRunningTime="2025-10-20 12:19:53.321991342 +0000 UTC m=+116.934479505"
	Oct 20 12:19:53 addons-399470 kubelet[1290]: I1020 12:19:53.322748    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-gljcq" podStartSLOduration=74.517978837 podStartE2EDuration="1m46.322736574s" podCreationTimestamp="2025-10-20 12:18:07 +0000 UTC" firstStartedPulling="2025-10-20 12:19:16.080809187 +0000 UTC m=+79.693297350" lastFinishedPulling="2025-10-20 12:19:47.885566925 +0000 UTC m=+111.498055087" observedRunningTime="2025-10-20 12:19:48.295044135 +0000 UTC m=+111.907532306" watchObservedRunningTime="2025-10-20 12:19:53.322736574 +0000 UTC m=+116.935224737"
	Oct 20 12:19:56 addons-399470 kubelet[1290]: I1020 12:19:56.305267    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/be8fcf68-f08d-4336-8e61-4abda92125cd-gcp-creds\") pod \"busybox\" (UID: \"be8fcf68-f08d-4336-8e61-4abda92125cd\") " pod="default/busybox"
	Oct 20 12:19:56 addons-399470 kubelet[1290]: I1020 12:19:56.305850    1290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrcv5\" (UniqueName: \"kubernetes.io/projected/be8fcf68-f08d-4336-8e61-4abda92125cd-kube-api-access-xrcv5\") pod \"busybox\" (UID: \"be8fcf68-f08d-4336-8e61-4abda92125cd\") " pod="default/busybox"
	Oct 20 12:19:56 addons-399470 kubelet[1290]: I1020 12:19:56.559730    1290 scope.go:117] "RemoveContainer" containerID="d101a20d45aee17c984a2731a6a2510c03bd01e4d0c2d68cf9d1e460f27e12b6"
	Oct 20 12:19:56 addons-399470 kubelet[1290]: I1020 12:19:56.595072    1290 scope.go:117] "RemoveContainer" containerID="dc58967e7604bb9c907040bd795ba2de3e5df0f40c50d8388de65ce52020146d"
	Oct 20 12:19:56 addons-399470 kubelet[1290]: E1020 12:19:56.691115    1290 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e56a65d08467b4fb8c54dc3a4231378a99fb55d0d4ef0569b5d45e57ed9a9992/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e56a65d08467b4fb8c54dc3a4231378a99fb55d0d4ef0569b5d45e57ed9a9992/diff: no such file or directory, extraDiskErr: <nil>
	Oct 20 12:19:56 addons-399470 kubelet[1290]: E1020 12:19:56.722025    1290 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/13512ab65dd904fe494cdccfc006d0eb30582beba8555ab650f4e4ffcb6a8bf0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/13512ab65dd904fe494cdccfc006d0eb30582beba8555ab650f4e4ffcb6a8bf0/diff: no such file or directory, extraDiskErr: <nil>
	Oct 20 12:20:00 addons-399470 kubelet[1290]: I1020 12:20:00.354335    1290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.408404483 podStartE2EDuration="4.354301256s" podCreationTimestamp="2025-10-20 12:19:56 +0000 UTC" firstStartedPulling="2025-10-20 12:19:56.561844993 +0000 UTC m=+120.174333156" lastFinishedPulling="2025-10-20 12:19:58.507741766 +0000 UTC m=+122.120229929" observedRunningTime="2025-10-20 12:19:59.341646198 +0000 UTC m=+122.954134377" watchObservedRunningTime="2025-10-20 12:20:00.354301256 +0000 UTC m=+123.966789419"
	Oct 20 12:20:06 addons-399470 kubelet[1290]: E1020 12:20:06.436266    1290 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:59916->127.0.0.1:37745: read tcp 127.0.0.1:59916->127.0.0.1:37745: read: connection reset by peer
	Oct 20 12:20:06 addons-399470 kubelet[1290]: E1020 12:20:06.821194    1290 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59934->127.0.0.1:37745: write tcp 127.0.0.1:59934->127.0.0.1:37745: write: broken pipe
	
	
	==> storage-provisioner [1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b] <==
	W1020 12:19:44.888464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:46.891077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:46.901804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:48.905970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:48.911886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:50.914936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:50.921934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:52.925409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:52.930691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:54.934215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:54.941684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:56.945240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:56.960718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:58.963438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:19:58.968084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:00.971621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:00.977623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:02.980550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:02.988027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:04.991345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:04.995905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:07.000266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:07.009587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:09.013412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:20:09.021807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
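
The storage-provisioner log above is dominated by a single repeated warning: something in the provisioner (presumably its ~2-second leader-election or endpoint polling loop) still reads the deprecated v1 Endpoints API. As an illustration only, here is a minimal client-go sketch of the replacement the warning names, listing discovery.k8s.io/v1 EndpointSlices; the default-kubeconfig setup is an assumption, not anything from this run.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a reachable kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The replacement API named by the warning: discovery.k8s.io/v1 EndpointSlice.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		// Each slice is tied back to its Service by a well-known label.
		for _, ep := range s.Endpoints {
			fmt.Println(s.Labels["kubernetes.io/service-name"], ep.Addresses)
		}
	}
}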
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-399470 -n addons-399470
helpers_test.go:269: (dbg) Run:  kubectl --context addons-399470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-sf6cv ingress-nginx-admission-patch-4xdfj registry-creds-764b6fb674-n7sjp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-399470 describe pod ingress-nginx-admission-create-sf6cv ingress-nginx-admission-patch-4xdfj registry-creds-764b6fb674-n7sjp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-399470 describe pod ingress-nginx-admission-create-sf6cv ingress-nginx-admission-patch-4xdfj registry-creds-764b6fb674-n7sjp: exit status 1 (84.059336ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sf6cv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4xdfj" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-n7sjp" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-399470 describe pod ingress-nginx-admission-create-sf6cv ingress-nginx-admission-patch-4xdfj registry-creds-764b6fb674-n7sjp: exit status 1
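
In the post-mortem above, the harness finds non-running pods with a server-side field selector (status.phase!=Running); by the time it describes them they have already been garbage-collected, hence the NotFound errors. A hedged client-go equivalent of that listing step, again assuming a default kubeconfig:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same filter the harness passes to kubectl: pods whose phase is not
	// Running, evaluated server-side, across all namespaces.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}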
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable headlamp --alsologtostderr -v=1: exit status 11 (275.09579ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:20:10.013764  305671 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:10.017559  305671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:10.017637  305671 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:10.017660  305671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:10.018100  305671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:10.018700  305671 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:10.019349  305671 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:10.019418  305671 addons.go:606] checking whether the cluster is paused
	I1020 12:20:10.019590  305671 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:10.019651  305671 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:10.020301  305671 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:10.042063  305671 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:10.042262  305671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:10.062211  305671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:10.175786  305671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:10.175872  305671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:10.206412  305671 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:10.206435  305671 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:10.206440  305671 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:10.206444  305671 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:10.206447  305671 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:10.206451  305671 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:10.206454  305671 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:10.206457  305671 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:10.206460  305671 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:10.206466  305671 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:10.206469  305671 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:10.206472  305671 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:10.206475  305671 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:10.206479  305671 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:10.206487  305671 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:10.206499  305671 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:10.206502  305671 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:10.206505  305671 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:10.206509  305671 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:10.206512  305671 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:10.206517  305671 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:10.206520  305671 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:10.206523  305671 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:10.206527  305671 cri.go:89] found id: ""
	I1020 12:20:10.206579  305671 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:10.221040  305671 out.go:203] 
	W1020 12:20:10.223831  305671 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:10.223858  305671 out.go:285] * 
	* 
	W1020 12:20:10.230337  305671 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:10.233286  305671 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.14s)
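
Every "addons disable" failure in this run dies at the same spot visible in the stderr above: after crictl successfully enumerates the kube-system containers, the paused-state probe shells out to sudo runc list -f json, which fails because /run/runc does not exist on this crio node, and minikube exits 11 with MK_ADDON_DISABLE_PAUSED. The sketch below reproduces the two-step probe from the log with one hypothetical change, treating the missing runc state directory as "nothing is paused"; it is an illustration, not minikube's actual cri.go code.

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func listPausedIDs() ([]byte, error) {
	// Step 1, as in the log: enumerate kube-system containers with crictl.
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

	// Step 2, as in the log: cross-check with runc's own state list.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this crio node /run/runc was never created, so the probe
		// could (hypothetically) treat that as "no runc-managed
		// containers" instead of failing the whole disable with exit 11.
		if _, statErr := os.Stat("/run/runc"); errors.Is(statErr, os.ErrNotExist) {
			return nil, nil
		}
		return nil, err
	}
	return out, nil
}

func main() {
	if _, err := listPausedIDs(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("paused-state probe completed")
}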

TestAddons/parallel/CloudSpanner (6.28s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-lp67l" [ca48ae2e-d32d-4010-b3a4-5c4f7658b1e9] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003677002s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (263.337141ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:21:16.210396  307613 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:21:16.211218  307613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:21:16.211259  307613 out.go:374] Setting ErrFile to fd 2...
	I1020 12:21:16.211280  307613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:21:16.211588  307613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:21:16.211961  307613 mustload.go:65] Loading cluster: addons-399470
	I1020 12:21:16.212428  307613 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:21:16.212469  307613 addons.go:606] checking whether the cluster is paused
	I1020 12:21:16.212616  307613 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:21:16.212655  307613 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:21:16.213159  307613 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:21:16.234951  307613 ssh_runner.go:195] Run: systemctl --version
	I1020 12:21:16.235011  307613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:21:16.252619  307613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:21:16.359022  307613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:21:16.359111  307613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:21:16.387950  307613 cri.go:89] found id: "6cc4aebaa824e21d483adcf9e349672b94a920537acebb51e10c4d48220d1546"
	I1020 12:21:16.387980  307613 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:21:16.387985  307613 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:21:16.387989  307613 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:21:16.387993  307613 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:21:16.387996  307613 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:21:16.387999  307613 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:21:16.388002  307613 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:21:16.388006  307613 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:21:16.388012  307613 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:21:16.388016  307613 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:21:16.388020  307613 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:21:16.388024  307613 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:21:16.388028  307613 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:21:16.388036  307613 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:21:16.388041  307613 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:21:16.388045  307613 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:21:16.388049  307613 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:21:16.388052  307613 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:21:16.388055  307613 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:21:16.388060  307613 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:21:16.388066  307613 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:21:16.388070  307613 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:21:16.388073  307613 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:21:16.388077  307613 cri.go:89] found id: ""
	I1020 12:21:16.388129  307613 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:21:16.405031  307613 out.go:203] 
	W1020 12:21:16.407825  307613 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:21:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:21:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:21:16.407875  307613 out.go:285] * 
	* 
	W1020 12:21:16.414811  307613 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:21:16.417572  307613 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.28s)
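
The passing half of this test, the wait for a Running pod matching app=cloud-spanner-emulator, boils down to polling a label-selector list with a 6-minute deadline. A hand-rolled stand-in for that wait using client-go's polling helper (an assumption-laden sketch, not the helpers_test.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s, give up after 6m, matching the timeout in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "app=cloud-spanner-emulator",
			})
			if err != nil {
				return false, err // give up on API errors
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // not ready yet; keep polling
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("app=cloud-spanner-emulator is Running")
}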

TestAddons/parallel/LocalPath (8.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-399470 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-399470 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-399470 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [dc2a46ff-84f0-4f31-be05-88899bf221e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [dc2a46ff-84f0-4f31-be05-88899bf221e2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [dc2a46ff-84f0-4f31-be05-88899bf221e2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003804327s
addons_test.go:967: (dbg) Run:  kubectl --context addons-399470 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 ssh "cat /opt/local-path-provisioner/pvc-806b0eb2-ecde-4da7-8807-a7df9f295882_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-399470 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-399470 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (293.829638ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1020 12:21:09.903532  307494 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:21:09.904333  307494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:21:09.904410  307494 out.go:374] Setting ErrFile to fd 2...
	I1020 12:21:09.904437  307494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:21:09.904753  307494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:21:09.905097  307494 mustload.go:65] Loading cluster: addons-399470
	I1020 12:21:09.905517  307494 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:21:09.905563  307494 addons.go:606] checking whether the cluster is paused
	I1020 12:21:09.905688  307494 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:21:09.905726  307494 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:21:09.906195  307494 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:21:09.923704  307494 ssh_runner.go:195] Run: systemctl --version
	I1020 12:21:09.923771  307494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:21:09.941142  307494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:21:10.051885  307494 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:21:10.051995  307494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:21:10.087826  307494 cri.go:89] found id: "dcbc088716d20112aa42df20f2a8fcf0bf9947d82110e1ef58b1516520f825b1"
	I1020 12:21:10.087849  307494 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:21:10.087855  307494 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:21:10.087859  307494 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:21:10.087862  307494 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:21:10.087866  307494 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:21:10.087869  307494 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:21:10.087872  307494 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:21:10.087876  307494 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:21:10.087885  307494 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:21:10.087889  307494 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:21:10.087892  307494 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:21:10.087895  307494 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:21:10.087898  307494 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:21:10.087901  307494 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:21:10.087911  307494 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:21:10.087918  307494 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:21:10.087922  307494 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:21:10.087926  307494 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:21:10.087930  307494 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:21:10.087935  307494 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:21:10.087938  307494 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:21:10.087941  307494 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:21:10.087945  307494 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:21:10.087948  307494 cri.go:89] found id: ""
	I1020 12:21:10.088011  307494 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:21:10.122704  307494 out.go:203] 
	W1020 12:21:10.125877  307494 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:21:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:21:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:21:10.125966  307494 out.go:285] * 
	* 
	W1020 12:21:10.132664  307494 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:21:10.136169  307494 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.46s)
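
Note on the failure mode: this and the other MK_ADDON_DISABLE_PAUSED exits in this run share one proximate cause. Before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this CRI-O node that command exits 1 because /run/runc is absent. A minimal diagnostic sketch, assuming the addons-399470 profile from this report and the stock minikube/crictl/runc CLIs (these commands are illustrative and were not part of the test run):

	# CRI-level listing does not depend on the runc state directory:
	out/minikube-linux-arm64 -p addons-399470 ssh -- sudo crictl ps -a
	# Confirm the state directory the paused check expects is missing:
	out/minikube-linux-arm64 -p addons-399470 ssh -- ls -ld /run/runc
	# Reproduce the failing check exactly as minikube runs it:
	out/minikube-linux-arm64 -p addons-399470 ssh -- sudo runc list -f json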
TestAddons/parallel/NvidiaDevicePlugin (6.28s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-q9xwr" [efbab45c-2225-4671-994c-713803dfe77d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003346384s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (278.495759ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1020 12:20:55.170713  307039 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:20:55.171550  307039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:55.171570  307039 out.go:374] Setting ErrFile to fd 2...
	I1020 12:20:55.171577  307039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:20:55.171861  307039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:20:55.172225  307039 mustload.go:65] Loading cluster: addons-399470
	I1020 12:20:55.172685  307039 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:55.172707  307039 addons.go:606] checking whether the cluster is paused
	I1020 12:20:55.172829  307039 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:20:55.172854  307039 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:20:55.173390  307039 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:20:55.192476  307039 ssh_runner.go:195] Run: systemctl --version
	I1020 12:20:55.192539  307039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:20:55.217411  307039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:20:55.327164  307039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:20:55.327262  307039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:20:55.359033  307039 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:20:55.359060  307039 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:20:55.359065  307039 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:20:55.359070  307039 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:20:55.359073  307039 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:20:55.359078  307039 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:20:55.359081  307039 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:20:55.359110  307039 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:20:55.359115  307039 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:20:55.359121  307039 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:20:55.359130  307039 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:20:55.359134  307039 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:20:55.359138  307039 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:20:55.359141  307039 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:20:55.359144  307039 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:20:55.359149  307039 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:20:55.359156  307039 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:20:55.359160  307039 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:20:55.359163  307039 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:20:55.359166  307039 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:20:55.359185  307039 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:20:55.359196  307039 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:20:55.359200  307039 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:20:55.359203  307039 cri.go:89] found id: ""
	I1020 12:20:55.359267  307039 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:20:55.374229  307039 out.go:203] 
	W1020 12:20:55.377065  307039 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:20:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:20:55.377089  307039 out.go:285] * 
	* 
	W1020 12:20:55.383574  307039 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:20:55.386437  307039 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.28s)
TestAddons/parallel/Yakd (6.29s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xk78f" [0543be38-d264-4c93-9166-cdc5669609ab] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003625033s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-399470 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-399470 addons disable yakd --alsologtostderr -v=1: exit status 11 (284.149741ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1020 12:21:01.449163  307182 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:21:01.450001  307182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:21:01.450050  307182 out.go:374] Setting ErrFile to fd 2...
	I1020 12:21:01.450072  307182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:21:01.450415  307182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:21:01.450788  307182 mustload.go:65] Loading cluster: addons-399470
	I1020 12:21:01.451208  307182 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:21:01.451257  307182 addons.go:606] checking whether the cluster is paused
	I1020 12:21:01.451396  307182 config.go:182] Loaded profile config "addons-399470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:21:01.451439  307182 host.go:66] Checking if "addons-399470" exists ...
	I1020 12:21:01.451954  307182 cli_runner.go:164] Run: docker container inspect addons-399470 --format={{.State.Status}}
	I1020 12:21:01.472229  307182 ssh_runner.go:195] Run: systemctl --version
	I1020 12:21:01.472295  307182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-399470
	I1020 12:21:01.491200  307182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/addons-399470/id_rsa Username:docker}
	I1020 12:21:01.594974  307182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:21:01.595123  307182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:21:01.625946  307182 cri.go:89] found id: "dcbc088716d20112aa42df20f2a8fcf0bf9947d82110e1ef58b1516520f825b1"
	I1020 12:21:01.625977  307182 cri.go:89] found id: "2bffcd469712d41bb9c2462377c8c3c8ff54c1fec6c040f18cf0647407bb61f0"
	I1020 12:21:01.625982  307182 cri.go:89] found id: "bf26f1feb82cca026125f8fcfaaccf29d763a96a5c3ea15f3b094798db9fe804"
	I1020 12:21:01.625986  307182 cri.go:89] found id: "c1088bae9a80855abb60a6a0d5d690ce9955ff5525c976e70ff681f08fb570a2"
	I1020 12:21:01.625990  307182 cri.go:89] found id: "2ecff662c7508ec6ce463093fe8546b0064fdb01e39d47ca6c20e85f241269cf"
	I1020 12:21:01.625993  307182 cri.go:89] found id: "3d5b3fe12ffc908b91c7a1ef5f2123d597c347ebaf80042b648c81d7767125e6"
	I1020 12:21:01.625998  307182 cri.go:89] found id: "fe8a3095a471f2d3d3cd12d668752024dc6143fd3da52e793c908884f332fb54"
	I1020 12:21:01.626001  307182 cri.go:89] found id: "a624518e6294ab0b5250bfdec35c3cd6fa8d43d83d35b74b6319ea12e3f8da5b"
	I1020 12:21:01.626008  307182 cri.go:89] found id: "079e485c9fbfe87544cb7ad3162db07a8e5702cbca3cf383d80a9dc306c06957"
	I1020 12:21:01.626014  307182 cri.go:89] found id: "03790aafd94f7b7f6c1f49cba7d70934407e8ef83870bf5a4a4420caac9d18e1"
	I1020 12:21:01.626018  307182 cri.go:89] found id: "5e2819fa3e373756e444fca66b18b4181bc08d61ebca0b767b15ca735d5b0caf"
	I1020 12:21:01.626021  307182 cri.go:89] found id: "8df71fb091362fba5591121b1ad252a5c184ed29aaf6cb17f2811af7994c0452"
	I1020 12:21:01.626024  307182 cri.go:89] found id: "d0148fcb0cd20abd736896c266850aeb8eb44492c2571c3ac4fae5f73a20ec4a"
	I1020 12:21:01.626027  307182 cri.go:89] found id: "f56add90136c740432536e4d34e59f44217821384d26a2f820bddbb072f6f270"
	I1020 12:21:01.626031  307182 cri.go:89] found id: "65f042711da869b4b7ef6e84f29dc970a7267f40677a1fee9bd1f9f6819e49c2"
	I1020 12:21:01.626035  307182 cri.go:89] found id: "f8f46e656fa1f79fb83cb62c841e3b6a61e8334f09a92fba901ca2c3fdba988a"
	I1020 12:21:01.626044  307182 cri.go:89] found id: "1576d096039bee25fd546b1b527355c132bc24e201b3fc4f754458aa73d1e80a"
	I1020 12:21:01.626048  307182 cri.go:89] found id: "1331a3ab9aa84354cb6ac7f3d5e2e687a039f5c64a4cb0361ff7a4127636010b"
	I1020 12:21:01.626051  307182 cri.go:89] found id: "9d231cda83b6ab82a7d3216515d757e69a16811c94687cd031e5cbd73963b842"
	I1020 12:21:01.626054  307182 cri.go:89] found id: "1e45f17a364d1ed1018a5dcf30bc786032388365dd52e16db2c16ceddd114f01"
	I1020 12:21:01.626058  307182 cri.go:89] found id: "559bae86282f40c13db5955d039339ec6baa8181631ba23051d0690219a89a33"
	I1020 12:21:01.626061  307182 cri.go:89] found id: "20bd22af6ef5beb408945a48db5663eeafe88acdf9fc851bbb4ec2d3cdb0729b"
	I1020 12:21:01.626064  307182 cri.go:89] found id: "70cb33ebef46560de0c99885a8e6b4d33aeca07907bc09031e9a394d991c7609"
	I1020 12:21:01.626067  307182 cri.go:89] found id: "cb73c63d851424d8a21745ed6139ec189814b071a7e5383148e384c0dfbc730c"
	I1020 12:21:01.626070  307182 cri.go:89] found id: "e7b4d0b02797f49bcd544a86f1c66894367ab12fd7206c8745c45f9625b1f16c"
	I1020 12:21:01.626073  307182 cri.go:89] found id: ""
	I1020 12:21:01.626130  307182 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 12:21:01.659114  307182 out.go:203] 
	W1020 12:21:01.662343  307182 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:21:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:21:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 12:21:01.662374  307182 out.go:285] * 
	* 
	W1020 12:21:01.668648  307182 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 12:21:01.674690  307182 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-399470 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.29s)
TestFunctional/parallel/ServiceCmdConnect (603.52s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-749689 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-749689 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9hrf6" [360212d5-91f2-44ee-bfa3-aff499ca3bd5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-749689 -n functional-749689
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-20 12:37:27.23802792 +0000 UTC m=+1238.276958286
functional_test.go:1645: (dbg) Run:  kubectl --context functional-749689 describe po hello-node-connect-7d85dfc575-9hrf6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-749689 describe po hello-node-connect-7d85dfc575-9hrf6 -n default:
Name:             hello-node-connect-7d85dfc575-9hrf6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-749689/192.168.49.2
Start Time:       Mon, 20 Oct 2025 12:27:26 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k9qn4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-k9qn4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9hrf6 to functional-749689
Normal   Pulling    7m7s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
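
The pull failures above are CRI-O's short-name handling, not a registry outage: with short-name-mode set to "enforcing" in containers-registries.conf, the unqualified reference kicbase/echo-server resolves to an ambiguous candidate list and the pull is refused instead of defaulting to docker.io. Two hedged workarounds, assuming the functional-749689 context from this report (neither is what the test itself does, and the sed assumes the short-name-mode key already exists in /etc/containers/registries.conf):

	# Fully qualify the image so no short-name resolution is needed:
	kubectl --context functional-749689 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server
	# Or relax enforcement on the node (illustrative only):
	out/minikube-linux-arm64 -p functional-749689 ssh -- sudo sed -i 's/^short-name-mode.*/short-name-mode = "permissive"/' /etc/containers/registries.conf
	out/minikube-linux-arm64 -p functional-749689 ssh -- sudo systemctl restart crio
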
functional_test.go:1645: (dbg) Run:  kubectl --context functional-749689 logs hello-node-connect-7d85dfc575-9hrf6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-749689 logs hello-node-connect-7d85dfc575-9hrf6 -n default: exit status 1 (101.53351ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9hrf6" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1645: kubectl --context functional-749689 logs hello-node-connect-7d85dfc575-9hrf6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-749689 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-9hrf6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-749689/192.168.49.2
Start Time:       Mon, 20 Oct 2025 12:27:26 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k9qn4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-k9qn4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9hrf6 to functional-749689
Normal   Pulling    7m7s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1618: (dbg) Run:  kubectl --context functional-749689 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-749689 logs -l app=hello-node-connect: exit status 1 (82.443546ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9hrf6" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-749689 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-749689 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.224.65
IPs:                      10.105.224.65
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32676/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
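
The empty Endpoints field above follows directly from the pod never becoming Ready: a Service only lists endpoints for pods that pass readiness, so this NodePort had nothing to route to. A quick confirmation, assuming the same kubectl context (not part of the test run):

	kubectl --context functional-749689 get endpoints hello-node-connect
	kubectl --context functional-749689 get pods -l app=hello-node-connect -o wide
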
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-749689
helpers_test.go:243: (dbg) docker inspect functional-749689:
-- stdout --
	[
	    {
	        "Id": "fcb06c14443254b8b3c05628f6421dffcb80292b77e110d167bc22a1692eacc9",
	        "Created": "2025-10-20T12:24:22.13303959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T12:24:22.201248095Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/fcb06c14443254b8b3c05628f6421dffcb80292b77e110d167bc22a1692eacc9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fcb06c14443254b8b3c05628f6421dffcb80292b77e110d167bc22a1692eacc9/hostname",
	        "HostsPath": "/var/lib/docker/containers/fcb06c14443254b8b3c05628f6421dffcb80292b77e110d167bc22a1692eacc9/hosts",
	        "LogPath": "/var/lib/docker/containers/fcb06c14443254b8b3c05628f6421dffcb80292b77e110d167bc22a1692eacc9/fcb06c14443254b8b3c05628f6421dffcb80292b77e110d167bc22a1692eacc9-json.log",
	        "Name": "/functional-749689",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-749689:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-749689",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fcb06c14443254b8b3c05628f6421dffcb80292b77e110d167bc22a1692eacc9",
	                "LowerDir": "/var/lib/docker/overlay2/c9aa1e808cc4bc4cc6cc5433ee0ca5a33145c22d689f06853dbd72977d7576b6-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c9aa1e808cc4bc4cc6cc5433ee0ca5a33145c22d689f06853dbd72977d7576b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c9aa1e808cc4bc4cc6cc5433ee0ca5a33145c22d689f06853dbd72977d7576b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c9aa1e808cc4bc4cc6cc5433ee0ca5a33145c22d689f06853dbd72977d7576b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-749689",
	                "Source": "/var/lib/docker/volumes/functional-749689/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-749689",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-749689",
	                "name.minikube.sigs.k8s.io": "functional-749689",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "81486e7cb55762b9b1046e8d37af64593193ef5526027c8e1020621b69b47205",
	            "SandboxKey": "/var/run/docker/netns/81486e7cb557",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-749689": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:48:cb:8d:53:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eed7ad339ab11ad0ed61a42d59c12be8e1c9b38bf726c7069b0efa673b7a989b",
	                    "EndpointID": "7899e2511fd2cfce60679c4564c396cc3dd162180a7d4d0bf6ae8843f2a94543",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-749689",
	                        "fcb06c144432"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-749689 -n functional-749689
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 logs -n 25: (1.460534975s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-749689 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:26 UTC │ 20 Oct 25 12:26 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 20 Oct 25 12:26 UTC │ 20 Oct 25 12:26 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 20 Oct 25 12:26 UTC │ 20 Oct 25 12:26 UTC │
	│ kubectl │ functional-749689 kubectl -- --context functional-749689 get pods                                                          │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:26 UTC │ 20 Oct 25 12:26 UTC │
	│ start   │ -p functional-749689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:26 UTC │ 20 Oct 25 12:27 UTC │
	│ service │ invalid-svc -p functional-749689                                                                                           │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │                     │
	│ cp      │ functional-749689 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ config  │ functional-749689 config unset cpus                                                                                        │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ config  │ functional-749689 config get cpus                                                                                          │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │                     │
	│ config  │ functional-749689 config set cpus 2                                                                                        │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ config  │ functional-749689 config get cpus                                                                                          │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ config  │ functional-749689 config unset cpus                                                                                        │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ ssh     │ functional-749689 ssh -n functional-749689 sudo cat /home/docker/cp-test.txt                                               │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ config  │ functional-749689 config get cpus                                                                                          │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │                     │
	│ ssh     │ functional-749689 ssh echo hello                                                                                           │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ cp      │ functional-749689 cp functional-749689:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3447331902/001/cp-test.txt │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ ssh     │ functional-749689 ssh cat /etc/hostname                                                                                    │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ ssh     │ functional-749689 ssh -n functional-749689 sudo cat /home/docker/cp-test.txt                                               │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ tunnel  │ functional-749689 tunnel --alsologtostderr                                                                                 │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │                     │
	│ tunnel  │ functional-749689 tunnel --alsologtostderr                                                                                 │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │                     │
	│ cp      │ functional-749689 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ ssh     │ functional-749689 ssh -n functional-749689 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ tunnel  │ functional-749689 tunnel --alsologtostderr                                                                                 │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │                     │
	│ addons  │ functional-749689 addons list                                                                                              │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	│ addons  │ functional-749689 addons list -o json                                                                                      │ functional-749689 │ jenkins │ v1.37.0 │ 20 Oct 25 12:27 UTC │ 20 Oct 25 12:27 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:26:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:26:12.110894  318162 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:26:12.111043  318162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:26:12.111050  318162 out.go:374] Setting ErrFile to fd 2...
	I1020 12:26:12.111054  318162 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:26:12.111334  318162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:26:12.111693  318162 out.go:368] Setting JSON to false
	I1020 12:26:12.112716  318162 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7723,"bootTime":1760955450,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 12:26:12.112778  318162 start.go:141] virtualization:  
	I1020 12:26:12.116517  318162 out.go:179] * [functional-749689] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 12:26:12.120510  318162 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:26:12.120659  318162 notify.go:220] Checking for updates...
	I1020 12:26:12.126756  318162 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:26:12.129755  318162 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:26:12.132457  318162 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 12:26:12.135335  318162 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 12:26:12.138142  318162 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:26:12.141439  318162 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:26:12.141544  318162 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:26:12.173927  318162 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 12:26:12.174046  318162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:26:12.234960  318162 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-20 12:26:12.226009413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:26:12.235053  318162 docker.go:318] overlay module found
	I1020 12:26:12.240005  318162 out.go:179] * Using the docker driver based on existing profile
	I1020 12:26:12.242818  318162 start.go:305] selected driver: docker
	I1020 12:26:12.242826  318162 start.go:925] validating driver "docker" against &{Name:functional-749689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-749689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:26:12.242909  318162 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:26:12.243016  318162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:26:12.299938  318162 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-20 12:26:12.290887263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:26:12.300394  318162 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:26:12.300440  318162 cni.go:84] Creating CNI manager for ""
	I1020 12:26:12.300497  318162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
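The two cni.go lines above capture minikube's CNI auto-selection: the docker driver provides no pod network of its own, so a non-Docker runtime such as CRI-O needs an explicit CNI, and kindnet is the default pick. A minimal Go sketch of that decision, under the stated assumption that the choice really only keys on driver and runtime here; the function name chooseCNI is ours, not minikube's API:

    // chooseCNI is a hypothetical reduction of the selection logged above.
    package main

    import "fmt"

    func chooseCNI(driver, runtime string) string {
        // Container-based drivers with a non-dockershim runtime need an
        // explicit CNI; kindnet is the lightweight default.
        if driver == "docker" && (runtime == "crio" || runtime == "containerd") {
            return "kindnet"
        }
        return "" // the runtime's built-in networking suffices
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // prints: kindnet
    }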
	I1020 12:26:12.300545  318162 start.go:349] cluster config:
	{Name:functional-749689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-749689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:26:12.303822  318162 out.go:179] * Starting "functional-749689" primary control-plane node in "functional-749689" cluster
	I1020 12:26:12.306720  318162 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:26:12.309615  318162 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:26:12.312584  318162 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:26:12.312645  318162 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 12:26:12.312653  318162 cache.go:58] Caching tarball of preloaded images
	I1020 12:26:12.312671  318162 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:26:12.312809  318162 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 12:26:12.312818  318162 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
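The preload steps above short-circuit on a cache hit: if the per-version tarball is already on disk, only its existence is verified and the download is skipped. A minimal sketch of that check, assuming the cache layout shown in the log; the helper name preloadExists is hypothetical:

    // preloadExists mirrors the cache-hit check logged above.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func preloadExists(cacheDir, k8sVersion, runtime string) bool {
        // e.g. preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-arm64.tar.lz4", k8sVersion, runtime)
        _, err := os.Stat(filepath.Join(cacheDir, "preloaded-tarball", name))
        return err == nil
    }

    func main() {
        if preloadExists(os.ExpandEnv("$HOME/.minikube/cache"), "v1.34.1", "cri-o") {
            fmt.Println("found local preload, skipping download")
        }
    }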
	I1020 12:26:12.312946  318162 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/config.json ...
	I1020 12:26:12.332514  318162 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 12:26:12.332525  318162 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 12:26:12.332545  318162 cache.go:232] Successfully downloaded all kic artifacts
	I1020 12:26:12.332567  318162 start.go:360] acquireMachinesLock for functional-749689: {Name:mk684aff70a813824464ead26bd144f38b5809ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 12:26:12.332646  318162 start.go:364] duration metric: took 56.559µs to acquireMachinesLock for "functional-749689"
	I1020 12:26:12.332667  318162 start.go:96] Skipping create...Using existing machine configuration
	I1020 12:26:12.332671  318162 fix.go:54] fixHost starting: 
	I1020 12:26:12.332926  318162 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
	I1020 12:26:12.350087  318162 fix.go:112] recreateIfNeeded on functional-749689: state=Running err=<nil>
	W1020 12:26:12.350114  318162 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 12:26:12.353476  318162 out.go:252] * Updating the running docker "functional-749689" container ...
	I1020 12:26:12.353502  318162 machine.go:93] provisionDockerMachine start ...
	I1020 12:26:12.353590  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:12.379497  318162 main.go:141] libmachine: Using SSH client type: native
	I1020 12:26:12.379798  318162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1020 12:26:12.379804  318162 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 12:26:12.528348  318162 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-749689
	
	I1020 12:26:12.528396  318162 ubuntu.go:182] provisioning hostname "functional-749689"
	I1020 12:26:12.528468  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:12.547538  318162 main.go:141] libmachine: Using SSH client type: native
	I1020 12:26:12.547840  318162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1020 12:26:12.547849  318162 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-749689 && echo "functional-749689" | sudo tee /etc/hostname
	I1020 12:26:12.709569  318162 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-749689
	
	I1020 12:26:12.709637  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:12.728194  318162 main.go:141] libmachine: Using SSH client type: native
	I1020 12:26:12.728518  318162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1020 12:26:12.728535  318162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-749689' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-749689/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-749689' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 12:26:12.877111  318162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 12:26:12.877127  318162 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 12:26:12.877145  318162 ubuntu.go:190] setting up certificates
	I1020 12:26:12.877153  318162 provision.go:84] configureAuth start
	I1020 12:26:12.877220  318162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-749689
	I1020 12:26:12.895269  318162 provision.go:143] copyHostCerts
	I1020 12:26:12.895326  318162 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 12:26:12.895342  318162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 12:26:12.895414  318162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 12:26:12.895514  318162 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 12:26:12.895517  318162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 12:26:12.895541  318162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 12:26:12.895606  318162 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 12:26:12.895609  318162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 12:26:12.895632  318162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 12:26:12.895675  318162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.functional-749689 san=[127.0.0.1 192.168.49.2 functional-749689 localhost minikube]
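provision.go then mints a fresh server certificate whose SANs are exactly the list logged above (loopback, node IP, hostname, localhost, minikube). A crypto/x509 sketch with the same SAN list; for brevity it self-signs, whereas the real step signs with the minikube CA (ca.pem/ca-key.pem):

    // Sketch only: self-signed stand-in for the CA-signed server cert above.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-749689"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            DNSNames:     []string{"functional-749689", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }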
	I1020 12:26:13.473160  318162 provision.go:177] copyRemoteCerts
	I1020 12:26:13.473221  318162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 12:26:13.473264  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:13.491840  318162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
	I1020 12:26:13.600503  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 12:26:13.623181  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 12:26:13.641967  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 12:26:13.660577  318162 provision.go:87] duration metric: took 783.400217ms to configureAuth
	I1020 12:26:13.660600  318162 ubuntu.go:206] setting minikube options for container-runtime
	I1020 12:26:13.660800  318162 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:26:13.660918  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:13.678535  318162 main.go:141] libmachine: Using SSH client type: native
	I1020 12:26:13.678859  318162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1020 12:26:13.678870  318162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 12:26:19.054940  318162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 12:26:19.054952  318162 machine.go:96] duration metric: took 6.701443872s to provisionDockerMachine
	I1020 12:26:19.054962  318162 start.go:293] postStartSetup for "functional-749689" (driver="docker")
	I1020 12:26:19.054971  318162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 12:26:19.055030  318162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 12:26:19.055067  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:19.079345  318162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
	I1020 12:26:19.184334  318162 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 12:26:19.187847  318162 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 12:26:19.187866  318162 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 12:26:19.187876  318162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 12:26:19.187930  318162 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 12:26:19.188008  318162 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 12:26:19.188082  318162 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/test/nested/copy/298259/hosts -> hosts in /etc/test/nested/copy/298259
	I1020 12:26:19.188124  318162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/298259
	I1020 12:26:19.195921  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 12:26:19.215133  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/test/nested/copy/298259/hosts --> /etc/test/nested/copy/298259/hosts (40 bytes)
	I1020 12:26:19.232558  318162 start.go:296] duration metric: took 177.581168ms for postStartSetup
	I1020 12:26:19.232639  318162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:26:19.232691  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:19.251061  318162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
	I1020 12:26:19.353543  318162 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 12:26:19.358305  318162 fix.go:56] duration metric: took 7.025626942s for fixHost
	I1020 12:26:19.358320  318162 start.go:83] releasing machines lock for "functional-749689", held for 7.025666188s
	I1020 12:26:19.358387  318162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-749689
	I1020 12:26:19.375897  318162 ssh_runner.go:195] Run: cat /version.json
	I1020 12:26:19.375933  318162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 12:26:19.375939  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:19.375999  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:19.398839  318162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
	I1020 12:26:19.414043  318162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
	I1020 12:26:19.595882  318162 ssh_runner.go:195] Run: systemctl --version
	I1020 12:26:19.602286  318162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 12:26:19.638293  318162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 12:26:19.642671  318162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 12:26:19.642743  318162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 12:26:19.650586  318162 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 12:26:19.650600  318162 start.go:495] detecting cgroup driver to use...
	I1020 12:26:19.650630  318162 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 12:26:19.650676  318162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 12:26:19.666414  318162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 12:26:19.679798  318162 docker.go:218] disabling cri-docker service (if available) ...
	I1020 12:26:19.679862  318162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 12:26:19.695340  318162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 12:26:19.708513  318162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 12:26:19.846989  318162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 12:26:19.987858  318162 docker.go:234] disabling docker service ...
	I1020 12:26:19.987914  318162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 12:26:20.009744  318162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 12:26:20.024699  318162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 12:26:20.164622  318162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 12:26:20.305831  318162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 12:26:20.319447  318162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 12:26:20.333440  318162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 12:26:20.333508  318162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:26:20.342414  318162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 12:26:20.342473  318162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:26:20.351005  318162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:26:20.359968  318162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:26:20.368570  318162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 12:26:20.376438  318162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:26:20.385302  318162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 12:26:20.393445  318162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
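Taken together, the sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) leave the relevant keys of /etc/crio/crio.conf.d/02-crio.conf roughly as follows. Section headers and any other keys already present in the drop-in are untouched by these commands, so this fragment is illustrative, not the literal file:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]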
	I1020 12:26:20.402501  318162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 12:26:20.411061  318162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 12:26:20.418167  318162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:26:20.548646  318162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 12:26:28.996553  318162 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.447882573s)
	I1020 12:26:28.996580  318162 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 12:26:28.996634  318162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 12:26:29.001507  318162 start.go:563] Will wait 60s for crictl version
	I1020 12:26:29.001575  318162 ssh_runner.go:195] Run: which crictl
	I1020 12:26:29.005736  318162 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 12:26:29.035336  318162 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
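The two "Will wait 60s" gates above poll first for the CRI socket to appear and then for crictl to answer. A local-exec sketch of that pattern; the real code runs these probes over SSH, and the helper name waitFor is ours:

    // waitFor retries a probe until it succeeds or the deadline passes.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func waitFor(timeout time.Duration, probe func() error) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s", timeout)
    }

    func main() {
        sock := "/var/run/crio/crio.sock"
        // Gate 1: the socket file exists.
        _ = waitFor(60*time.Second, func() error { _, err := os.Stat(sock); return err })
        // Gate 2: crictl can actually talk to the runtime.
        _ = waitFor(60*time.Second, func() error {
            return exec.Command("sudo", "crictl", "version").Run()
        })
    }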
	I1020 12:26:29.035432  318162 ssh_runner.go:195] Run: crio --version
	I1020 12:26:29.062966  318162 ssh_runner.go:195] Run: crio --version
	I1020 12:26:29.093340  318162 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 12:26:29.096414  318162 cli_runner.go:164] Run: docker network inspect functional-749689 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 12:26:29.112518  318162 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1020 12:26:29.119671  318162 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1020 12:26:29.122553  318162 kubeadm.go:883] updating cluster {Name:functional-749689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-749689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 12:26:29.122675  318162 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:26:29.122754  318162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:26:29.157302  318162 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:26:29.157313  318162 crio.go:433] Images already preloaded, skipping extraction
	I1020 12:26:29.157368  318162 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 12:26:29.185383  318162 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 12:26:29.185394  318162 cache_images.go:85] Images are preloaded, skipping loading
	I1020 12:26:29.185401  318162 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1020 12:26:29.185499  318162 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-749689 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-749689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 12:26:29.185579  318162 ssh_runner.go:195] Run: crio config
	I1020 12:26:29.246374  318162 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1020 12:26:29.246428  318162 cni.go:84] Creating CNI manager for ""
	I1020 12:26:29.246436  318162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:26:29.246448  318162 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 12:26:29.246471  318162 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-749689 NodeName:functional-749689 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 12:26:29.246593  318162 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-749689"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 12:26:29.246662  318162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 12:26:29.254520  318162 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 12:26:29.254587  318162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 12:26:29.262094  318162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 12:26:29.274844  318162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 12:26:29.287620  318162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1020 12:26:29.300061  318162 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1020 12:26:29.303858  318162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:26:29.434944  318162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:26:29.449565  318162 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689 for IP: 192.168.49.2
	I1020 12:26:29.449577  318162 certs.go:195] generating shared ca certs ...
	I1020 12:26:29.449591  318162 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:26:29.449726  318162 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 12:26:29.449765  318162 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 12:26:29.449770  318162 certs.go:257] generating profile certs ...
	I1020 12:26:29.449843  318162 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.key
	I1020 12:26:29.449889  318162 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/apiserver.key.d88c72d2
	I1020 12:26:29.449924  318162 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/proxy-client.key
	I1020 12:26:29.450038  318162 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 12:26:29.450066  318162 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 12:26:29.450077  318162 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 12:26:29.450100  318162 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 12:26:29.450121  318162 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 12:26:29.450141  318162 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 12:26:29.450179  318162 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 12:26:29.450751  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 12:26:29.470084  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 12:26:29.488132  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 12:26:29.506066  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 12:26:29.523907  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 12:26:29.542687  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 12:26:29.560134  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 12:26:29.578168  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 12:26:29.595781  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 12:26:29.613505  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 12:26:29.630874  318162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 12:26:29.648716  318162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 12:26:29.661601  318162 ssh_runner.go:195] Run: openssl version
	I1020 12:26:29.667805  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 12:26:29.676589  318162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 12:26:29.680321  318162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 12:26:29.680396  318162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 12:26:29.721388  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 12:26:29.729191  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 12:26:29.737665  318162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 12:26:29.741264  318162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 12:26:29.741320  318162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 12:26:29.782228  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 12:26:29.790065  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 12:26:29.798522  318162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:26:29.802298  318162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:26:29.802358  318162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 12:26:29.861693  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
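Each openssl x509 -hash / ln -fs pair above installs a certificate under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so that hash-directory lookups in /etc/ssl/certs resolve it. A local-exec sketch of one installation; installTrusted is our name, and the real code runs the same commands over SSH:

    // installTrusted links a PEM cert into /etc/ssl/certs under its subject hash.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installTrusted(pemPath string) error {
        // `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // ln -fs equivalent: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installTrusted("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }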
	I1020 12:26:29.879969  318162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 12:26:29.885850  318162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 12:26:29.951227  318162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 12:26:30.041897  318162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 12:26:30.164232  318162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 12:26:30.281453  318162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 12:26:30.388358  318162 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
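The six openssl x509 -checkend 86400 probes above all ask one question: does this control-plane certificate stay valid for at least another 24 hours? The same check expressed directly in Go; validFor is our name for the helper:

    // validFor reports whether the cert at path is still valid d from now.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }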
	I1020 12:26:30.454751  318162 kubeadm.go:400] StartCluster: {Name:functional-749689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-749689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:26:30.454824  318162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 12:26:30.454897  318162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:26:30.508531  318162 cri.go:89] found id: "2d45169308af49c369fce97e854a397dc33526bfd801d626d0fbdcba58167475"
	I1020 12:26:30.508542  318162 cri.go:89] found id: "26d2adaf302946362d68acf0636d319af027839502b4d9b72fc9a757c674c544"
	I1020 12:26:30.508545  318162 cri.go:89] found id: "86ec7e4902cc6a1fb0ceb5cd190c91c43119725838312ec0ddffcb23c21cf208"
	I1020 12:26:30.508548  318162 cri.go:89] found id: "ce6fd19056f4cda709af3717cb459983c0fc33929db7e308b66c3dbc50bc399d"
	I1020 12:26:30.508550  318162 cri.go:89] found id: "dcd91f0f413b8988fc5931195d38643f76c1bdf97e6e1b2018f259a92f7696d0"
	I1020 12:26:30.508553  318162 cri.go:89] found id: "4466668144c1fbc2afec90f9137b40def52496500164b683dd24e17a7b02b3ea"
	I1020 12:26:30.508555  318162 cri.go:89] found id: "5bcc12f33380c080af49ccd471bad8e39585aa701ff8b99ceb197a75447784b6"
	I1020 12:26:30.508557  318162 cri.go:89] found id: "40921d7031bcb0a3ca44e5b54df8a4bd8e09a0b2c48f5da3956984ad6df9226d"
	I1020 12:26:30.508561  318162 cri.go:89] found id: "61ca56825c04158386a57f7ff2bd1020e622ff1dea6c974e0343914f857e07e4"
	I1020 12:26:30.508567  318162 cri.go:89] found id: "dbb8cf9eab27ea6489e0462ea71d8faccb08525aa645f8ab6eefdff91c02d6bc"
	I1020 12:26:30.508569  318162 cri.go:89] found id: "367209197dd38944b9884f92e1a4a5fbe17aa36ea2b8996c1c223d65ed77aed2"
	I1020 12:26:30.508578  318162 cri.go:89] found id: "0e6c711846a5ed5b058004877b767f7a52d06b50a424d8fb2c8351623db3db26"
	I1020 12:26:30.508581  318162 cri.go:89] found id: "039c90e0cd9dc855304fed8b07ea4d31020bf9f305f068f018a009b30d23165f"
	I1020 12:26:30.508583  318162 cri.go:89] found id: "467bc17ff3f68daa18e462410fe3f42a481cc43062ac0ed709d7424e5570fd38"
	I1020 12:26:30.508585  318162 cri.go:89] found id: "21b8f30d1a664871ae1779e8d1b41c9506b8498c415a6b64346341eb99bcae21"
	I1020 12:26:30.508589  318162 cri.go:89] found id: "c9e8bbf0c35a4eab7f6fa3eeb636c9e5f393bf00df6fa16a34c354ebdeca87f1"
	I1020 12:26:30.508591  318162 cri.go:89] found id: ""
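The sixteen "found id" lines come from a single crictl ps -a --quiet call filtered to the kube-system namespace label. A local-exec sketch returning that ID list; kubeSystemContainers is our name, and the real code routes the command through the SSH runner:

    // kubeSystemContainers lists all kube-system container IDs via crictl.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func kubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        // --quiet prints one 64-hex ID per line; Fields drops blanks.
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := kubeSystemContainers()
        fmt.Println(len(ids), err)
    }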
	I1020 12:26:30.508643  318162 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 12:26:30.523489  318162 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:26:30Z" level=error msg="open /run/runc: no such file or directory"
	I1020 12:26:30.523569  318162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 12:26:30.536229  318162 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 12:26:30.536239  318162 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 12:26:30.536291  318162 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 12:26:30.545549  318162 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:26:30.546139  318162 kubeconfig.go:125] found "functional-749689" server: "https://192.168.49.2:8441"
	I1020 12:26:30.547483  318162 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 12:26:30.561891  318162 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-20 12:24:29.349350225 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-20 12:26:29.294908371 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
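The drift check above hinges on diff -u exit codes: 0 means the stored kubeadm.yaml matches the freshly rendered one, 1 means real drift (with the diff body, here the admission-plugins change, explaining why), and anything else is an error. A sketch of that decision; configDrifted is our name for the helper:

    // configDrifted compares the on-disk kubeadm config with the new render.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        var ee *exec.ExitError
        switch {
        case err == nil:
            return false, "", nil // exit 0: identical
        case errors.As(err, &ee) && ee.ExitCode() == 1:
            return true, string(out), nil // exit 1: real drift, diff in out
        default:
            return false, "", err // exit >1 or exec failure
        }
    }

    func main() {
        drift, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drift, err)
        fmt.Print(diff)
    }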
	I1020 12:26:30.561899  318162 kubeadm.go:1160] stopping kube-system containers ...
	I1020 12:26:30.561909  318162 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1020 12:26:30.561978  318162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 12:26:30.611978  318162 cri.go:89] found id: "2d45169308af49c369fce97e854a397dc33526bfd801d626d0fbdcba58167475"
	I1020 12:26:30.611989  318162 cri.go:89] found id: "26d2adaf302946362d68acf0636d319af027839502b4d9b72fc9a757c674c544"
	I1020 12:26:30.611992  318162 cri.go:89] found id: "86ec7e4902cc6a1fb0ceb5cd190c91c43119725838312ec0ddffcb23c21cf208"
	I1020 12:26:30.611995  318162 cri.go:89] found id: "ce6fd19056f4cda709af3717cb459983c0fc33929db7e308b66c3dbc50bc399d"
	I1020 12:26:30.611997  318162 cri.go:89] found id: "dcd91f0f413b8988fc5931195d38643f76c1bdf97e6e1b2018f259a92f7696d0"
	I1020 12:26:30.612000  318162 cri.go:89] found id: "4466668144c1fbc2afec90f9137b40def52496500164b683dd24e17a7b02b3ea"
	I1020 12:26:30.612003  318162 cri.go:89] found id: "5bcc12f33380c080af49ccd471bad8e39585aa701ff8b99ceb197a75447784b6"
	I1020 12:26:30.612005  318162 cri.go:89] found id: "40921d7031bcb0a3ca44e5b54df8a4bd8e09a0b2c48f5da3956984ad6df9226d"
	I1020 12:26:30.612012  318162 cri.go:89] found id: "61ca56825c04158386a57f7ff2bd1020e622ff1dea6c974e0343914f857e07e4"
	I1020 12:26:30.612018  318162 cri.go:89] found id: "dbb8cf9eab27ea6489e0462ea71d8faccb08525aa645f8ab6eefdff91c02d6bc"
	I1020 12:26:30.612028  318162 cri.go:89] found id: "367209197dd38944b9884f92e1a4a5fbe17aa36ea2b8996c1c223d65ed77aed2"
	I1020 12:26:30.612030  318162 cri.go:89] found id: "0e6c711846a5ed5b058004877b767f7a52d06b50a424d8fb2c8351623db3db26"
	I1020 12:26:30.612032  318162 cri.go:89] found id: "039c90e0cd9dc855304fed8b07ea4d31020bf9f305f068f018a009b30d23165f"
	I1020 12:26:30.612034  318162 cri.go:89] found id: "467bc17ff3f68daa18e462410fe3f42a481cc43062ac0ed709d7424e5570fd38"
	I1020 12:26:30.612036  318162 cri.go:89] found id: "21b8f30d1a664871ae1779e8d1b41c9506b8498c415a6b64346341eb99bcae21"
	I1020 12:26:30.612039  318162 cri.go:89] found id: "c9e8bbf0c35a4eab7f6fa3eeb636c9e5f393bf00df6fa16a34c354ebdeca87f1"
	I1020 12:26:30.612041  318162 cri.go:89] found id: ""
	I1020 12:26:30.612046  318162 cri.go:252] Stopping containers: [2d45169308af49c369fce97e854a397dc33526bfd801d626d0fbdcba58167475 26d2adaf302946362d68acf0636d319af027839502b4d9b72fc9a757c674c544 86ec7e4902cc6a1fb0ceb5cd190c91c43119725838312ec0ddffcb23c21cf208 ce6fd19056f4cda709af3717cb459983c0fc33929db7e308b66c3dbc50bc399d dcd91f0f413b8988fc5931195d38643f76c1bdf97e6e1b2018f259a92f7696d0 4466668144c1fbc2afec90f9137b40def52496500164b683dd24e17a7b02b3ea 5bcc12f33380c080af49ccd471bad8e39585aa701ff8b99ceb197a75447784b6 40921d7031bcb0a3ca44e5b54df8a4bd8e09a0b2c48f5da3956984ad6df9226d 61ca56825c04158386a57f7ff2bd1020e622ff1dea6c974e0343914f857e07e4 dbb8cf9eab27ea6489e0462ea71d8faccb08525aa645f8ab6eefdff91c02d6bc 367209197dd38944b9884f92e1a4a5fbe17aa36ea2b8996c1c223d65ed77aed2 0e6c711846a5ed5b058004877b767f7a52d06b50a424d8fb2c8351623db3db26 039c90e0cd9dc855304fed8b07ea4d31020bf9f305f068f018a009b30d23165f 467bc17ff3f68daa18e462410fe3f42a481cc43062ac0ed709d7424e5570fd38 21b8f30d1a664871ae1779e8d1b41c9506b8498c415a6b64346341eb99bcae21 c9e8bbf0c35a4eab7f6fa3eeb636c9e5f393bf00df6fa16a34c354ebdeca87f1]
	I1020 12:26:30.612103  318162 ssh_runner.go:195] Run: which crictl
	I1020 12:26:30.616223  318162 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 2d45169308af49c369fce97e854a397dc33526bfd801d626d0fbdcba58167475 26d2adaf302946362d68acf0636d319af027839502b4d9b72fc9a757c674c544 86ec7e4902cc6a1fb0ceb5cd190c91c43119725838312ec0ddffcb23c21cf208 ce6fd19056f4cda709af3717cb459983c0fc33929db7e308b66c3dbc50bc399d dcd91f0f413b8988fc5931195d38643f76c1bdf97e6e1b2018f259a92f7696d0 4466668144c1fbc2afec90f9137b40def52496500164b683dd24e17a7b02b3ea 5bcc12f33380c080af49ccd471bad8e39585aa701ff8b99ceb197a75447784b6 40921d7031bcb0a3ca44e5b54df8a4bd8e09a0b2c48f5da3956984ad6df9226d 61ca56825c04158386a57f7ff2bd1020e622ff1dea6c974e0343914f857e07e4 dbb8cf9eab27ea6489e0462ea71d8faccb08525aa645f8ab6eefdff91c02d6bc 367209197dd38944b9884f92e1a4a5fbe17aa36ea2b8996c1c223d65ed77aed2 0e6c711846a5ed5b058004877b767f7a52d06b50a424d8fb2c8351623db3db26 039c90e0cd9dc855304fed8b07ea4d31020bf9f305f068f018a009b30d23165f 467bc17ff3f68daa18e462410fe3f42a481cc43062ac0ed709d7424e5570fd38 21b8f30d1a664871ae1779e8d1b41c9506b8498c415a6b64346341eb99bcae21 c9e8bbf0c35a4eab7f6fa3eeb636c9e5f393bf00df6fa16a34c354ebdeca87f1
	I1020 12:26:46.890349  318162 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 2d45169308af49c369fce97e854a397dc33526bfd801d626d0fbdcba58167475 26d2adaf302946362d68acf0636d319af027839502b4d9b72fc9a757c674c544 86ec7e4902cc6a1fb0ceb5cd190c91c43119725838312ec0ddffcb23c21cf208 ce6fd19056f4cda709af3717cb459983c0fc33929db7e308b66c3dbc50bc399d dcd91f0f413b8988fc5931195d38643f76c1bdf97e6e1b2018f259a92f7696d0 4466668144c1fbc2afec90f9137b40def52496500164b683dd24e17a7b02b3ea 5bcc12f33380c080af49ccd471bad8e39585aa701ff8b99ceb197a75447784b6 40921d7031bcb0a3ca44e5b54df8a4bd8e09a0b2c48f5da3956984ad6df9226d 61ca56825c04158386a57f7ff2bd1020e622ff1dea6c974e0343914f857e07e4 dbb8cf9eab27ea6489e0462ea71d8faccb08525aa645f8ab6eefdff91c02d6bc 367209197dd38944b9884f92e1a4a5fbe17aa36ea2b8996c1c223d65ed77aed2 0e6c711846a5ed5b058004877b767f7a52d06b50a424d8fb2c8351623db3db26 039c90e0cd9dc855304fed8b07ea4d31020bf9f305f068f018a009b30d23165f 467bc17ff3f68daa18e462410fe3f42a481cc43062ac0ed709d7424e5570fd38 21b8f30d1a664871ae1779e8d1b41c9506b8498c415a6b64346341eb99bcae21 c9e8bbf0c35a4eab7f6fa3eeb636c9e5f393bf00df6fa16a34c354ebdeca87f1: (16.274074587s)
	I1020 12:26:46.890415  318162 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1020 12:26:47.000591  318162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 12:26:47.010100  318162 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct 20 12:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 20 12:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 20 12:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 20 12:24 /etc/kubernetes/scheduler.conf
	
	I1020 12:26:47.010156  318162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1020 12:26:47.018049  318162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1020 12:26:47.025341  318162 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:26:47.025397  318162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 12:26:47.032793  318162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1020 12:26:47.040034  318162 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:26:47.040086  318162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 12:26:47.047015  318162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1020 12:26:47.054375  318162 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1020 12:26:47.054431  318162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 12:26:47.061759  318162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 12:26:47.069577  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:26:47.118883  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:26:49.850230  318162 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.731316923s)
	I1020 12:26:49.850289  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:26:50.084431  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:26:50.159472  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
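The control-plane restart above re-runs individual kubeadm phases rather than a full `kubeadm init`. For reference, the same sequence can be reproduced by hand on the node; a minimal sketch using the exact subcommands and config path from the log (the PATH override to the v1.34.1 binaries is elided):

    # Regenerate certs, kubeconfigs, kubelet config/start, static-pod manifests,
    # and the local etcd manifest -- the five phases invoked above.
    sudo kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml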
	I1020 12:26:50.236789  318162 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:26:50.236859  318162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:26:50.736993  318162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:26:51.237518  318162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:26:51.253972  318162 api_server.go:72] duration metric: took 1.017199865s to wait for apiserver process to appear ...
	I1020 12:26:51.253986  318162 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:26:51.254004  318162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1020 12:26:51.254350  318162 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I1020 12:26:51.755019  318162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1020 12:26:54.703514  318162 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 12:26:54.703532  318162 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 12:26:54.703545  318162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1020 12:26:54.823477  318162 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 12:26:54.823494  318162 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 12:26:54.823509  318162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1020 12:26:54.898923  318162 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 12:26:54.898940  318162 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 12:26:55.254368  318162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1020 12:26:55.264323  318162 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:26:55.264344  318162 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:26:55.754804  318162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1020 12:26:55.766432  318162 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 12:26:55.766462  318162 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 12:26:56.254643  318162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1020 12:26:56.262566  318162 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1020 12:26:56.276321  318162 api_server.go:141] control plane version: v1.34.1
	I1020 12:26:56.276336  318162 api_server.go:131] duration metric: took 5.022344607s to wait for apiserver health ...
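The 403 -> 500 -> 200 progression above is the normal apiserver startup arc: anonymous probes are rejected until the RBAC bootstrap roles exist, then individual post-start hooks flip from `[-]` to `[+]`, and finally `/healthz` returns plain `ok`. The same per-check breakdown can be requested through an authenticated client; a minimal sketch, reusing the node-local kubectl and kubeconfig paths seen elsewhere in this log:

    # One [+]/[-] line per health check, matching the 500 bodies above:
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get --raw='/healthz?verbose'
    # /readyz and /livez expose the same format and are the non-deprecated endpoints:
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get --raw='/readyz?verbose'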
	I1020 12:26:56.276343  318162 cni.go:84] Creating CNI manager for ""
	I1020 12:26:56.276351  318162 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:26:56.279920  318162 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 12:26:56.282917  318162 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 12:26:56.287014  318162 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 12:26:56.287039  318162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 12:26:56.300230  318162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
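With the kindnet manifest applied, pod networking can be spot-checked from the node. A minimal sketch (same binary and kubeconfig paths as above; `grep` is used instead of a label selector so no manifest labels are assumed):

    # crio needs the portmap CNI plugin on disk, as stat'ed above:
    stat /opt/cni/bin/portmap
    # confirm the kindnet pod came back cleanly after the restart:
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -o wide | grep kindnet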
	I1020 12:26:56.744123  318162 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:26:56.747870  318162 system_pods.go:59] 8 kube-system pods found
	I1020 12:26:56.747904  318162 system_pods.go:61] "coredns-66bc5c9577-zlbfx" [d346e77c-d5ef-4c64-a952-c1ef34cf363c] Running
	I1020 12:26:56.747913  318162 system_pods.go:61] "etcd-functional-749689" [37cf7abd-ef26-46db-97f5-745198bbdb15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:26:56.747917  318162 system_pods.go:61] "kindnet-ghrhh" [a55f92e0-eceb-43f5-a7e3-023ca7b262b7] Running
	I1020 12:26:56.747924  318162 system_pods.go:61] "kube-apiserver-functional-749689" [b0cecf9d-4ee9-4f3b-a2f0-c3ab7bdf00b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:26:56.747930  318162 system_pods.go:61] "kube-controller-manager-functional-749689" [d998ab2b-05f9-4c03-b26e-c940776178fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:26:56.747935  318162 system_pods.go:61] "kube-proxy-2ljkr" [2f417c89-ad08-4def-8447-11191a092040] Running
	I1020 12:26:56.747941  318162 system_pods.go:61] "kube-scheduler-functional-749689" [3682b8a1-4397-45d2-9c89-b1e6f984a9d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:26:56.747944  318162 system_pods.go:61] "storage-provisioner" [decc9dd8-81b4-4f9e-961d-f2d9ab7a76b6] Running
	I1020 12:26:56.747949  318162 system_pods.go:74] duration metric: took 3.815746ms to wait for pod list to return data ...
	I1020 12:26:56.747955  318162 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:26:56.751209  318162 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 12:26:56.751228  318162 node_conditions.go:123] node cpu capacity is 2
	I1020 12:26:56.751238  318162 node_conditions.go:105] duration metric: took 3.279698ms to run NodePressure ...
	I1020 12:26:56.751325  318162 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1020 12:26:57.036023  318162 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1020 12:26:57.039557  318162 kubeadm.go:743] kubelet initialised
	I1020 12:26:57.039567  318162 kubeadm.go:744] duration metric: took 3.53324ms waiting for restarted kubelet to initialise ...
	I1020 12:26:57.039581  318162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 12:26:57.048782  318162 ops.go:34] apiserver oom_adj: -16
	I1020 12:26:57.048795  318162 kubeadm.go:601] duration metric: took 26.512550802s to restartPrimaryControlPlane
	I1020 12:26:57.048803  318162 kubeadm.go:402] duration metric: took 26.594060853s to StartCluster
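The `oom_adj: -16` read a few lines up uses the legacy procfs knob; the kernel mirrors it from the modern `oom_score_adj`, which the kubelet pins strongly negative for control-plane static pods so the OOM killer evicts almost anything else first. A sketch of both reads (the second file is standard procfs, not something this log exercises):

    # Legacy interface, exactly as minikube reads it above:
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # Modern interface; expect a large negative value (commonly -997):
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj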
	I1020 12:26:57.048817  318162 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:26:57.048899  318162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:26:57.049592  318162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:26:57.049849  318162 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 12:26:57.050083  318162 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:26:57.050121  318162 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 12:26:57.050183  318162 addons.go:69] Setting storage-provisioner=true in profile "functional-749689"
	I1020 12:26:57.050195  318162 addons.go:238] Setting addon storage-provisioner=true in "functional-749689"
	W1020 12:26:57.050200  318162 addons.go:247] addon storage-provisioner should already be in state true
	I1020 12:26:57.050240  318162 host.go:66] Checking if "functional-749689" exists ...
	I1020 12:26:57.050690  318162 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
	I1020 12:26:57.050850  318162 addons.go:69] Setting default-storageclass=true in profile "functional-749689"
	I1020 12:26:57.050861  318162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-749689"
	I1020 12:26:57.051139  318162 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
	I1020 12:26:57.054638  318162 out.go:179] * Verifying Kubernetes components...
	I1020 12:26:57.057758  318162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 12:26:57.086102  318162 addons.go:238] Setting addon default-storageclass=true in "functional-749689"
	W1020 12:26:57.086112  318162 addons.go:247] addon default-storageclass should already be in state true
	I1020 12:26:57.086136  318162 host.go:66] Checking if "functional-749689" exists ...
	I1020 12:26:57.086552  318162 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
	I1020 12:26:57.088777  318162 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 12:26:57.091678  318162 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:26:57.091689  318162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 12:26:57.091780  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:57.115133  318162 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 12:26:57.115147  318162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 12:26:57.115207  318162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:26:57.159214  318162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
	I1020 12:26:57.159878  318162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
	I1020 12:26:57.271898  318162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 12:26:57.286564  318162 node_ready.go:35] waiting up to 6m0s for node "functional-749689" to be "Ready" ...
	I1020 12:26:57.289695  318162 node_ready.go:49] node "functional-749689" is "Ready"
	I1020 12:26:57.289710  318162 node_ready.go:38] duration metric: took 3.118613ms for node "functional-749689" to be "Ready" ...
	I1020 12:26:57.289721  318162 api_server.go:52] waiting for apiserver process to appear ...
	I1020 12:26:57.289779  318162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:26:57.306833  318162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 12:26:57.314911  318162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 12:26:57.315840  318162 api_server.go:72] duration metric: took 265.968222ms to wait for apiserver process to appear ...
	I1020 12:26:57.315850  318162 api_server.go:88] waiting for apiserver healthz status ...
	I1020 12:26:57.315867  318162 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1020 12:26:57.328945  318162 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1020 12:26:57.333584  318162 api_server.go:141] control plane version: v1.34.1
	I1020 12:26:57.333600  318162 api_server.go:131] duration metric: took 17.744273ms to wait for apiserver health ...
	I1020 12:26:57.333607  318162 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 12:26:57.337529  318162 system_pods.go:59] 8 kube-system pods found
	I1020 12:26:57.337545  318162 system_pods.go:61] "coredns-66bc5c9577-zlbfx" [d346e77c-d5ef-4c64-a952-c1ef34cf363c] Running
	I1020 12:26:57.337553  318162 system_pods.go:61] "etcd-functional-749689" [37cf7abd-ef26-46db-97f5-745198bbdb15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:26:57.337557  318162 system_pods.go:61] "kindnet-ghrhh" [a55f92e0-eceb-43f5-a7e3-023ca7b262b7] Running
	I1020 12:26:57.337563  318162 system_pods.go:61] "kube-apiserver-functional-749689" [b0cecf9d-4ee9-4f3b-a2f0-c3ab7bdf00b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:26:57.337569  318162 system_pods.go:61] "kube-controller-manager-functional-749689" [d998ab2b-05f9-4c03-b26e-c940776178fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:26:57.337573  318162 system_pods.go:61] "kube-proxy-2ljkr" [2f417c89-ad08-4def-8447-11191a092040] Running
	I1020 12:26:57.337578  318162 system_pods.go:61] "kube-scheduler-functional-749689" [3682b8a1-4397-45d2-9c89-b1e6f984a9d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:26:57.337581  318162 system_pods.go:61] "storage-provisioner" [decc9dd8-81b4-4f9e-961d-f2d9ab7a76b6] Running
	I1020 12:26:57.337586  318162 system_pods.go:74] duration metric: took 3.974444ms to wait for pod list to return data ...
	I1020 12:26:57.337593  318162 default_sa.go:34] waiting for default service account to be created ...
	I1020 12:26:57.341551  318162 default_sa.go:45] found service account: "default"
	I1020 12:26:57.341565  318162 default_sa.go:55] duration metric: took 3.967634ms for default service account to be created ...
	I1020 12:26:57.341572  318162 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 12:26:57.347424  318162 system_pods.go:86] 8 kube-system pods found
	I1020 12:26:57.347440  318162 system_pods.go:89] "coredns-66bc5c9577-zlbfx" [d346e77c-d5ef-4c64-a952-c1ef34cf363c] Running
	I1020 12:26:57.347448  318162 system_pods.go:89] "etcd-functional-749689" [37cf7abd-ef26-46db-97f5-745198bbdb15] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 12:26:57.347460  318162 system_pods.go:89] "kindnet-ghrhh" [a55f92e0-eceb-43f5-a7e3-023ca7b262b7] Running
	I1020 12:26:57.347466  318162 system_pods.go:89] "kube-apiserver-functional-749689" [b0cecf9d-4ee9-4f3b-a2f0-c3ab7bdf00b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 12:26:57.347471  318162 system_pods.go:89] "kube-controller-manager-functional-749689" [d998ab2b-05f9-4c03-b26e-c940776178fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 12:26:57.347479  318162 system_pods.go:89] "kube-proxy-2ljkr" [2f417c89-ad08-4def-8447-11191a092040] Running
	I1020 12:26:57.347484  318162 system_pods.go:89] "kube-scheduler-functional-749689" [3682b8a1-4397-45d2-9c89-b1e6f984a9d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 12:26:57.347487  318162 system_pods.go:89] "storage-provisioner" [decc9dd8-81b4-4f9e-961d-f2d9ab7a76b6] Running
	I1020 12:26:57.347494  318162 system_pods.go:126] duration metric: took 5.917041ms to wait for k8s-apps to be running ...
	I1020 12:26:57.347501  318162 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 12:26:57.347559  318162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:26:58.267646  318162 system_svc.go:56] duration metric: took 920.138435ms WaitForService to wait for kubelet
	I1020 12:26:58.267658  318162 kubeadm.go:586] duration metric: took 1.217790656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 12:26:58.267675  318162 node_conditions.go:102] verifying NodePressure condition ...
	I1020 12:26:58.273252  318162 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 12:26:58.273266  318162 node_conditions.go:123] node cpu capacity is 2
	I1020 12:26:58.273276  318162 node_conditions.go:105] duration metric: took 5.597209ms to run NodePressure ...
	I1020 12:26:58.273287  318162 start.go:241] waiting for startup goroutines ...
	I1020 12:26:58.279191  318162 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 12:26:58.282116  318162 addons.go:514] duration metric: took 1.231980347s for enable addons: enabled=[storage-provisioner default-storageclass]
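Only storage-provisioner and default-storageclass are re-enabled after the restart of this profile. The enabled set can be confirmed with a stock minikube subcommand (the binary name may differ in CI, where the freshly built arm64 binary is invoked directly):

    minikube -p functional-749689 addons list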
	I1020 12:26:58.282153  318162 start.go:246] waiting for cluster config update ...
	I1020 12:26:58.282164  318162 start.go:255] writing updated cluster config ...
	I1020 12:26:58.282453  318162 ssh_runner.go:195] Run: rm -f paused
	I1020 12:26:58.285884  318162 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:26:58.289438  318162 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zlbfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:26:58.293962  318162 pod_ready.go:94] pod "coredns-66bc5c9577-zlbfx" is "Ready"
	I1020 12:26:58.293974  318162 pod_ready.go:86] duration metric: took 4.525056ms for pod "coredns-66bc5c9577-zlbfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:26:58.296315  318162 pod_ready.go:83] waiting for pod "etcd-functional-749689" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 12:27:00.309453  318162 pod_ready.go:104] pod "etcd-functional-749689" is not "Ready", error: <nil>
	W1020 12:27:02.802289  318162 pod_ready.go:104] pod "etcd-functional-749689" is not "Ready", error: <nil>
	W1020 12:27:05.302020  318162 pod_ready.go:104] pod "etcd-functional-749689" is not "Ready", error: <nil>
	I1020 12:27:06.802176  318162 pod_ready.go:94] pod "etcd-functional-749689" is "Ready"
	I1020 12:27:06.802191  318162 pod_ready.go:86] duration metric: took 8.505863954s for pod "etcd-functional-749689" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:06.804663  318162 pod_ready.go:83] waiting for pod "kube-apiserver-functional-749689" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:06.809265  318162 pod_ready.go:94] pod "kube-apiserver-functional-749689" is "Ready"
	I1020 12:27:06.809279  318162 pod_ready.go:86] duration metric: took 4.602431ms for pod "kube-apiserver-functional-749689" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:06.811786  318162 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-749689" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:06.816719  318162 pod_ready.go:94] pod "kube-controller-manager-functional-749689" is "Ready"
	I1020 12:27:06.816733  318162 pod_ready.go:86] duration metric: took 4.934628ms for pod "kube-controller-manager-functional-749689" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:06.819095  318162 pod_ready.go:83] waiting for pod "kube-proxy-2ljkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:07.001448  318162 pod_ready.go:94] pod "kube-proxy-2ljkr" is "Ready"
	I1020 12:27:07.001464  318162 pod_ready.go:86] duration metric: took 182.355784ms for pod "kube-proxy-2ljkr" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:07.199346  318162 pod_ready.go:83] waiting for pod "kube-scheduler-functional-749689" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:07.599348  318162 pod_ready.go:94] pod "kube-scheduler-functional-749689" is "Ready"
	I1020 12:27:07.599363  318162 pod_ready.go:86] duration metric: took 400.004231ms for pod "kube-scheduler-functional-749689" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 12:27:07.599374  318162 pod_ready.go:40] duration metric: took 9.313470545s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 12:27:07.651795  318162 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 12:27:07.654807  318162 out.go:179] * Done! kubectl is now configured to use "functional-749689" cluster and "default" namespace by default
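The closing version note records a one-minor skew (kubectl 1.33 against a 1.34 cluster), which is inside kubectl's supported +/-1 minor window, hence only an informational message. To check it by hand:

    # Prints both client and server versions; skew beyond one minor is unsupported:
    kubectl version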
	
	
	==> CRI-O <==
	Oct 20 12:27:43 functional-749689 crio[3547]: time="2025-10-20T12:27:43.295689155Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-8kxx6 Namespace:default ID:4d8b36ff7a43bccbe51dcc8f8f68d1e4eec4d61f4ed66a1345c30aa88027fedf UID:96a97577-ad88-497c-a539-f8cbdd055f3e NetNS:/var/run/netns/6db1bef4-01e2-4ff3-8f69-ed178ab7c87f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078d38}] Aliases:map[]}"
	Oct 20 12:27:43 functional-749689 crio[3547]: time="2025-10-20T12:27:43.295832664Z" level=info msg="Checking pod default_hello-node-75c85bcc94-8kxx6 for CNI network kindnet (type=ptp)"
	Oct 20 12:27:43 functional-749689 crio[3547]: time="2025-10-20T12:27:43.300316785Z" level=info msg="Ran pod sandbox 4d8b36ff7a43bccbe51dcc8f8f68d1e4eec4d61f4ed66a1345c30aa88027fedf with infra container: default/hello-node-75c85bcc94-8kxx6/POD" id=9033eed7-3e74-46ae-8935-494bbbb150c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 12:27:43 functional-749689 crio[3547]: time="2025-10-20T12:27:43.302001319Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2fa299dd-654d-495a-9c2c-71392ef125da name=/runtime.v1.ImageService/PullImage
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.324613653Z" level=info msg="Stopping pod sandbox: db224aab5a1bce1bd9feb743b958380622a7e1819d2a94dcb8b4cb6e282b333a" id=b18294b7-c606-4e30-8c19-e979ac357ba1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.324669785Z" level=info msg="Stopped pod sandbox (already stopped): db224aab5a1bce1bd9feb743b958380622a7e1819d2a94dcb8b4cb6e282b333a" id=b18294b7-c606-4e30-8c19-e979ac357ba1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.325322413Z" level=info msg="Removing pod sandbox: db224aab5a1bce1bd9feb743b958380622a7e1819d2a94dcb8b4cb6e282b333a" id=5b28bb77-4be1-4e90-91ba-04cc72c9d0ea name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.329061309Z" level=info msg="Removed pod sandbox: db224aab5a1bce1bd9feb743b958380622a7e1819d2a94dcb8b4cb6e282b333a" id=5b28bb77-4be1-4e90-91ba-04cc72c9d0ea name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.329872339Z" level=info msg="Stopping pod sandbox: 49b20ad7a7ed3c9deaba7e3f223652c5bf0212319c78eb04bc1050a60fb9c0e8" id=7ae96516-16c6-499d-b8b5-a65c21ad0886 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.329927478Z" level=info msg="Stopped pod sandbox (already stopped): 49b20ad7a7ed3c9deaba7e3f223652c5bf0212319c78eb04bc1050a60fb9c0e8" id=7ae96516-16c6-499d-b8b5-a65c21ad0886 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.330269915Z" level=info msg="Removing pod sandbox: 49b20ad7a7ed3c9deaba7e3f223652c5bf0212319c78eb04bc1050a60fb9c0e8" id=7f1d081a-5f24-4108-923a-7364bf5a9648 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.333717558Z" level=info msg="Removed pod sandbox: 49b20ad7a7ed3c9deaba7e3f223652c5bf0212319c78eb04bc1050a60fb9c0e8" id=7f1d081a-5f24-4108-923a-7364bf5a9648 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.334365608Z" level=info msg="Stopping pod sandbox: 2a87795a2c9f5f12afaab0aee359e0795ad810c1817f0cd4be9aea1bcdbe2fcf" id=daecf054-0d3d-43f8-a754-5517529b13be name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.334412944Z" level=info msg="Stopped pod sandbox (already stopped): 2a87795a2c9f5f12afaab0aee359e0795ad810c1817f0cd4be9aea1bcdbe2fcf" id=daecf054-0d3d-43f8-a754-5517529b13be name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.33476613Z" level=info msg="Removing pod sandbox: 2a87795a2c9f5f12afaab0aee359e0795ad810c1817f0cd4be9aea1bcdbe2fcf" id=1b73b38a-7dfe-429b-a6e4-aa5eb7b52b70 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:27:50 functional-749689 crio[3547]: time="2025-10-20T12:27:50.338502515Z" level=info msg="Removed pod sandbox: 2a87795a2c9f5f12afaab0aee359e0795ad810c1817f0cd4be9aea1bcdbe2fcf" id=1b73b38a-7dfe-429b-a6e4-aa5eb7b52b70 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 20 12:27:55 functional-749689 crio[3547]: time="2025-10-20T12:27:55.210825442Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=59f0f1f2-c763-4dd3-b73b-38daba0cbfa2 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:28:08 functional-749689 crio[3547]: time="2025-10-20T12:28:08.212078193Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6ce6e2b5-cfc2-4094-8922-cc5f2c8c09c4 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:28:23 functional-749689 crio[3547]: time="2025-10-20T12:28:23.211041743Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fe9effb7-ef55-4cad-b1ab-e150402e4ea7 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:28:57 functional-749689 crio[3547]: time="2025-10-20T12:28:57.211252972Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9d716dab-737c-45b2-ba80-36344b09bcd4 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:29:12 functional-749689 crio[3547]: time="2025-10-20T12:29:12.211357842Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7913b44d-0291-429f-81b5-b6005aab315c name=/runtime.v1.ImageService/PullImage
	Oct 20 12:30:20 functional-749689 crio[3547]: time="2025-10-20T12:30:20.212190955Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fbd088e6-3e3a-41cc-9ed1-7362dd3e6b55 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:30:34 functional-749689 crio[3547]: time="2025-10-20T12:30:34.211352896Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=eb8038b2-f73a-4711-af97-d71520033148 name=/runtime.v1.ImageService/PullImage
	Oct 20 12:33:05 functional-749689 crio[3547]: time="2025-10-20T12:33:05.211631229Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d37ad97b-c601-4b8b-9141-acb0a0cb236e name=/runtime.v1.ImageService/PullImage
	Oct 20 12:33:26 functional-749689 crio[3547]: time="2025-10-20T12:33:26.212234373Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cd2e874d-2464-4715-969c-fdaa7e5cf67c name=/runtime.v1.ImageService/PullImage
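The CRI-O excerpt shows `kicbase/echo-server:latest` being re-requested for several minutes with no matching "Pulled image" line, consistent with the hello-node service tests timing out while their pods wait on the image. A minimal debugging sketch against the runtime (standard crictl subcommands; image ref taken from the log):

    # Is the image present at all?
    sudo crictl images | grep echo-server
    # Retry the pull by hand and surface any registry or network error:
    sudo crictl pull kicbase/echo-server:latest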
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ff5fd6232b732       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   e837139619c88       sp-pod                                      default
	c58f3665f89f8       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   a0418223fd037       nginx-svc                                   default
	afc796fb8d43e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   859d3e5422da2       kube-proxy-2ljkr                            kube-system
	b84bce134a9e3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   d509a24d91343       storage-provisioner                         kube-system
	c7c41b8424dfb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   a2446ef992fdd       coredns-66bc5c9577-zlbfx                    kube-system
	e6db0b46f5dde       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   c2a1df75af34f       kindnet-ghrhh                               kube-system
	ec4a76f452686       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   74b02d68abc43       kube-apiserver-functional-749689            kube-system
	e6b13b383e20c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   e1e476eefd2c7       kube-controller-manager-functional-749689   kube-system
	b8898d5f40520       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   454aa6b2c7d9b       kube-scheduler-functional-749689            kube-system
	69482ab0b622a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   477cd6cbea755       etcd-functional-749689                      kube-system
	2d45169308af4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Exited              etcd                      2                   477cd6cbea755       etcd-functional-749689                      kube-system
	26d2adaf30294       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Exited              kube-controller-manager   2                   e1e476eefd2c7       kube-controller-manager-functional-749689   kube-system
	ce6fd19056f4c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Exited              coredns                   2                   a2446ef992fdd       coredns-66bc5c9577-zlbfx                    kube-system
	dcd91f0f413b8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Exited              storage-provisioner       2                   d509a24d91343       storage-provisioner                         kube-system
	4466668144c1f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Exited              kube-proxy                2                   859d3e5422da2       kube-proxy-2ljkr                            kube-system
	5bcc12f33380c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Exited              kindnet-cni               2                   c2a1df75af34f       kindnet-ghrhh                               kube-system
	40921d7031bcb       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Exited              kube-scheduler            2                   454aa6b2c7d9b       kube-scheduler-functional-749689            kube-system
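The table is CRI-O's own view of the node, including the Exited attempt-2 containers left behind by the control-plane restart alongside their attempt-3 replacements (and the freshly created attempt-0 apiserver). It can be reproduced on the node with:

    # All containers, running and exited, as reported by the runtime:
    sudo crictl ps -a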
	
	
	==> coredns [c7c41b8424dfbe02051ed83f6321f6ed48f3b26d56413375bafb335f5afa4d67] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51218 - 30743 "HINFO IN 5010627736629057162.4330380778277217620. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014577306s
	
	
	==> coredns [ce6fd19056f4cda709af3717cb459983c0fc33929db7e308b66c3dbc50bc399d] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35631 - 51186 "HINFO IN 880642428108583738.7146369171066786226. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010980098s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
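The connection-refused and "waiting for Kubernetes API" lines come from the attempt-2 coredns container and simply span the window when the apiserver was down; the SIGTERM at the end is its shutdown, and the attempt-3 container above starts clean. A sketch for checking the live replica (`k8s-app=kube-dns` is the standard label on kubeadm/minikube coredns pods):

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20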
	
	
	==> describe nodes <==
	Name:               functional-749689
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-749689
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=functional-749689
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T12_24_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 12:24:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-749689
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 12:37:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 12:34:45 +0000   Mon, 20 Oct 2025 12:24:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 12:34:45 +0000   Mon, 20 Oct 2025 12:24:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 12:34:45 +0000   Mon, 20 Oct 2025 12:24:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 12:34:45 +0000   Mon, 20 Oct 2025 12:25:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-749689
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e1151de4-2bf2-4c00-a195-a75816ce28e8
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8kxx6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-9hrf6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-zlbfx                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-749689                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-ghrhh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-749689             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-749689    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2ljkr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-749689             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-749689 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-749689 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-749689 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-749689 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-749689 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-749689 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-749689 event: Registered Node functional-749689 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-749689 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-749689 event: Registered Node functional-749689 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-749689 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-749689 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-749689 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-749689 event: Registered Node functional-749689 in Controller
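This block is plain `kubectl describe node` output. Two details worth noting for this run: CPU requests already total 850m of the node's 2 CPUs, and the Kube-Proxy Version field is empty, as expected now that recent kubelets no longer populate it. Reproduce with:

    kubectl describe node functional-749689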
	
	
	==> dmesg <==
	[Oct20 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016790] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.502629] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033585] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.794361] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.786595] kauditd_printk_skb: 36 callbacks suppressed
	[Oct20 11:29] hrtimer: interrupt took 3085842 ns
	[Oct20 12:16] kauditd_printk_skb: 8 callbacks suppressed
	[Oct20 12:17] overlayfs: idmapped layers are currently not supported
	[  +0.065938] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct20 12:23] overlayfs: idmapped layers are currently not supported
	[Oct20 12:24] overlayfs: idmapped layers are currently not supported
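The dmesg excerpt is host-kernel context rather than test output; the `overlayfs: idmapped layers` messages line up with the kic container (re)starts. The same buffer is reachable through the profile's node; a minimal sketch:

    minikube -p functional-749689 ssh -- sudo dmesg -T | tail -n 20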
	
	
	==> etcd [2d45169308af49c369fce97e854a397dc33526bfd801d626d0fbdcba58167475] <==
	{"level":"info","ts":"2025-10-20T12:26:30.485502Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","recovered-remote-peer-id":"aec36adc501070cc","recovered-remote-peer-urls":["https://192.168.49.2:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-20T12:26:30.488514Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-20T12:26:30.488648Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-10-20T12:26:30.488750Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-10-20T12:26:30.488871Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=()"}
	{"level":"info","ts":"2025-10-20T12:26:30.488951Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"aec36adc501070cc became follower at term 3"}
	{"level":"info","ts":"2025-10-20T12:26:30.489003Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft aec36adc501070cc [peers: [], term: 3, commit: 602, applied: 0, lastindex: 602, lastterm: 3]"}
	{"level":"warn","ts":"2025-10-20T12:26:30.500759Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-10-20T12:26:30.560509Z","caller":"mvcc/kvstore.go:408","msg":"kvstore restored","current-rev":573}
	{"level":"info","ts":"2025-10-20T12:26:30.612222Z","caller":"storage/quota.go:93","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-10-20T12:26:30.613244Z","caller":"etcdserver/corrupt.go:91","msg":"starting initial corruption check","local-member-id":"aec36adc501070cc","timeout":"7s"}
	{"level":"info","ts":"2025-10-20T12:26:30.613721Z","caller":"etcdserver/corrupt.go:172","msg":"initial corruption checking passed; no corruption","local-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-20T12:26:30.613824Z","caller":"etcdserver/server.go:589","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.6.4","cluster-id":"fa54960ea34d58be","cluster-version":"3.6"}
	{"level":"info","ts":"2025-10-20T12:26:30.614067Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-20T12:26:30.614416Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-20T12:26:30.614791Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-20T12:26:30.614719Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:26:30.637017Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:26:30.637085Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T12:26:30.614947Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-20T12:26:30.615070Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-20T12:26:30.637307Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-20T12:26:30.637610Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-10-20T12:26:30.637728Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-10-20T12:26:30.637871Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","from":"3.6","to":"3.6"}
	
	
	==> etcd [69482ab0b622adf5d1b8950bce94e0abb3de4943bee4f0956633267f8d34c65f] <==
	{"level":"warn","ts":"2025-10-20T12:26:53.229854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.254259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.283523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.311028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.334166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.369311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.433114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.457268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.474819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.516505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.552817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.578022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.628023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.657336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.671047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.706742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.713618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.735721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.761072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.780879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.798167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T12:26:53.895382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41628","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T12:36:52.076846Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1133}
	{"level":"info","ts":"2025-10-20T12:36:52.107452Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1133,"took":"30.020364ms","hash":1807265060,"current-db-size-bytes":3297280,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1429504,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-20T12:36:52.107508Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1807265060,"revision":1133,"compact-revision":-1}
	
	
	==> kernel <==
	 12:37:29 up  2:19,  0 user,  load average: 0.03, 0.30, 1.28
	Linux functional-749689 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5bcc12f33380c080af49ccd471bad8e39585aa701ff8b99ceb197a75447784b6] <==
	I1020 12:26:30.313065       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 12:26:30.320955       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1020 12:26:30.321105       1 main.go:148] setting mtu 1500 for CNI 
	I1020 12:26:30.321118       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 12:26:30.321132       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T12:26:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 12:26:30.536873       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 12:26:30.545643       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 12:26:30.545684       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 12:26:30.554381       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 12:26:40.538652       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 12:26:40.546382       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 12:26:40.555093       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 12:26:40.575785       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 12:26:41.408686       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 12:26:41.417571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 12:26:41.836626       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 12:26:42.051459       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 12:26:43.543980       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 12:26:43.562996       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 12:26:44.354098       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 12:26:44.732835       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	
	
	==> kindnet [e6db0b46f5dde04fcbd7b11fee9db61eac8e7bb6df61d2b15bc2c65d5f8aa3d8] <==
	I1020 12:35:25.915532       1 main.go:301] handling current node
	I1020 12:35:35.907638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:35:35.907670       1 main.go:301] handling current node
	I1020 12:35:45.913629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:35:45.913664       1 main.go:301] handling current node
	I1020 12:35:55.907504       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:35:55.907600       1 main.go:301] handling current node
	I1020 12:36:05.907259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:36:05.907295       1 main.go:301] handling current node
	I1020 12:36:15.911573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:36:15.911690       1 main.go:301] handling current node
	I1020 12:36:25.912852       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:36:25.912975       1 main.go:301] handling current node
	I1020 12:36:35.907655       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:36:35.907688       1 main.go:301] handling current node
	I1020 12:36:45.914192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:36:45.914226       1 main.go:301] handling current node
	I1020 12:36:55.907696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:36:55.907797       1 main.go:301] handling current node
	I1020 12:37:05.907632       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:37:05.907672       1 main.go:301] handling current node
	I1020 12:37:15.908539       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:37:15.908571       1 main.go:301] handling current node
	I1020 12:37:25.912338       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1020 12:37:25.912396       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ec4a76f45268610dd3dfe640c4562b59d1bab28312b41c1a6f835783df73f773] <==
	I1020 12:26:54.969656       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 12:26:54.973045       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 12:26:54.975915       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 12:26:54.978348       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 12:26:54.982728       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 12:26:54.987438       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 12:26:54.992419       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 12:26:54.999964       1 cache.go:39] Caches are synced for autoregister controller
	I1020 12:26:55.002649       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 12:26:55.273201       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 12:26:55.566658       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 12:26:56.733323       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 12:26:56.897722       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 12:26:57.012004       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 12:26:57.020659       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 12:27:10.931407       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.201.68"}
	I1020 12:27:10.948176       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 12:27:10.952560       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 12:27:17.155821       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.234.108"}
	I1020 12:27:26.716278       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 12:27:26.896730       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.224.65"}
	E1020 12:27:34.682753       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60704: use of closed network connection
	E1020 12:27:35.529980       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1020 12:27:43.030650       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.28.96"}
	I1020 12:36:54.912105       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [26d2adaf302946362d68acf0636d319af027839502b4d9b72fc9a757c674c544] <==
	
	
	==> kube-controller-manager [e6b13b383e20c564c7c6d72c7e651af402ff9da9f5029263f8dad3c7d46c81d9] <==
	I1020 12:26:58.179431       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 12:26:58.186157       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 12:26:58.187243       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 12:26:58.187590       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 12:26:58.188668       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 12:26:58.190307       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 12:26:58.192809       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 12:26:58.197164       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 12:26:58.200739       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 12:26:58.204022       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 12:26:58.206569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 12:26:58.206694       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 12:26:58.206788       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 12:26:58.206829       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 12:26:58.212473       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 12:26:58.212567       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 12:26:58.212647       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-749689"
	I1020 12:26:58.212687       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 12:26:58.217596       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1020 12:26:58.217701       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 12:26:58.227294       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 12:26:58.227437       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 12:26:58.235186       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 12:26:58.235310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 12:26:58.239468       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [4466668144c1fbc2afec90f9137b40def52496500164b683dd24e17a7b02b3ea] <==
	I1020 12:26:31.363726       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:26:31.458074       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1020 12:26:41.892450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-749689&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:44212->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:26:42.767893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-749689&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:26:44.973977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-749689&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [afc796fb8d43e17d462ac64067c95d61e327be0f0070983323aa7d72dda6bb9d] <==
	I1020 12:26:55.765423       1 server_linux.go:53] "Using iptables proxy"
	I1020 12:26:55.854424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 12:26:55.954675       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 12:26:55.954719       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1020 12:26:55.954795       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 12:26:56.023353       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 12:26:56.023516       1 server_linux.go:132] "Using iptables Proxier"
	I1020 12:26:56.031891       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 12:26:56.032255       1 server.go:527] "Version info" version="v1.34.1"
	I1020 12:26:56.032500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:26:56.033773       1 config.go:200] "Starting service config controller"
	I1020 12:26:56.033794       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 12:26:56.033810       1 config.go:106] "Starting endpoint slice config controller"
	I1020 12:26:56.033815       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 12:26:56.033826       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 12:26:56.033831       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 12:26:56.034606       1 config.go:309] "Starting node config controller"
	I1020 12:26:56.034628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 12:26:56.034635       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 12:26:56.134114       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 12:26:56.134119       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 12:26:56.134139       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [40921d7031bcb0a3ca44e5b54df8a4bd8e09a0b2c48f5da3956984ad6df9226d] <==
	E1020 12:26:44.863866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 12:26:44.962059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 12:26:45.046862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 12:26:45.201727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 12:26:45.204810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 12:26:45.230852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 12:26:45.807218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 12:26:45.837779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 12:26:45.905987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 12:26:45.983839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 12:26:46.134211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 12:26:46.199827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 12:26:46.216578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 12:26:46.225322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 12:26:46.254917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 12:26:46.364926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 12:26:46.382713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1020 12:26:46.750812       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1020 12:26:46.751224       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1020 12:26:46.751249       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1020 12:26:46.751270       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1020 12:26:46.751293       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:26:46.751310       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:26:46.751366       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1020 12:26:46.751388       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b8898d5f40520a2a27eaa6f3e081e520231309bd1e278801e6c2029818da12fe] <==
	I1020 12:26:53.710221       1 serving.go:386] Generated self-signed cert in-memory
	I1020 12:26:55.088456       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 12:26:55.088501       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 12:26:55.094708       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 12:26:55.094793       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 12:26:55.094814       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 12:26:55.094840       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 12:26:55.097707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:26:55.097878       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:26:55.097952       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:26:55.097992       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 12:26:55.195729       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 12:26:55.198244       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 12:26:55.198195       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 20 12:34:56 functional-749689 kubelet[4058]: E1020 12:34:56.211027    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:34:59 functional-749689 kubelet[4058]: E1020 12:34:59.211070    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:35:07 functional-749689 kubelet[4058]: E1020 12:35:07.211084    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:35:14 functional-749689 kubelet[4058]: E1020 12:35:14.210823    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:35:20 functional-749689 kubelet[4058]: E1020 12:35:20.211709    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:35:29 functional-749689 kubelet[4058]: E1020 12:35:29.211197    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:35:32 functional-749689 kubelet[4058]: E1020 12:35:32.210757    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:35:42 functional-749689 kubelet[4058]: E1020 12:35:42.213571    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:35:47 functional-749689 kubelet[4058]: E1020 12:35:47.211261    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:35:53 functional-749689 kubelet[4058]: E1020 12:35:53.210626    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:36:01 functional-749689 kubelet[4058]: E1020 12:36:01.210755    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:36:06 functional-749689 kubelet[4058]: E1020 12:36:06.211440    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:36:14 functional-749689 kubelet[4058]: E1020 12:36:14.210492    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:36:17 functional-749689 kubelet[4058]: E1020 12:36:17.210730    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:36:25 functional-749689 kubelet[4058]: E1020 12:36:25.211320    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:36:30 functional-749689 kubelet[4058]: E1020 12:36:30.211801    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:36:38 functional-749689 kubelet[4058]: E1020 12:36:38.210865    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:36:44 functional-749689 kubelet[4058]: E1020 12:36:44.211162    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:36:50 functional-749689 kubelet[4058]: E1020 12:36:50.211426    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:36:55 functional-749689 kubelet[4058]: E1020 12:36:55.210498    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:37:05 functional-749689 kubelet[4058]: E1020 12:37:05.211017    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:37:06 functional-749689 kubelet[4058]: E1020 12:37:06.211030    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:37:18 functional-749689 kubelet[4058]: E1020 12:37:18.211689    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	Oct 20 12:37:20 functional-749689 kubelet[4058]: E1020 12:37:20.211560    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9hrf6" podUID="360212d5-91f2-44ee-bfa3-aff499ca3bd5"
	Oct 20 12:37:29 functional-749689 kubelet[4058]: E1020 12:37:29.210746    4058 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8kxx6" podUID="96a97577-ad88-497c-a539-f8cbdd055f3e"
	
	
	==> storage-provisioner [b84bce134a9e31072ef2ba7a4199f2442c6355085371579dc82e89fab1c00250] <==
	W1020 12:37:03.828878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:05.832102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:05.836742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:07.840517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:07.845154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:09.849130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:09.854172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:11.857201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:11.864112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:13.867190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:13.871552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:15.874078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:15.878727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:17.881501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:17.889515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:19.894308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:19.899601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:21.904358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:21.911087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:23.914207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:23.918765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:25.921931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:25.926307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:27.930026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 12:37:27.938036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dcd91f0f413b8988fc5931195d38643f76c1bdf97e6e1b2018f259a92f7696d0] <==
	I1020 12:26:30.200803       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 12:26:30.202193       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-749689 -n functional-749689
helpers_test.go:269: (dbg) Run:  kubectl --context functional-749689 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-8kxx6 hello-node-connect-7d85dfc575-9hrf6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-749689 describe pod hello-node-75c85bcc94-8kxx6 hello-node-connect-7d85dfc575-9hrf6
helpers_test.go:290: (dbg) kubectl --context functional-749689 describe pod hello-node-75c85bcc94-8kxx6 hello-node-connect-7d85dfc575-9hrf6:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-8kxx6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-749689/192.168.49.2
	Start Time:       Mon, 20 Oct 2025 12:27:42 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g7cwz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g7cwz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8kxx6 to functional-749689
	  Normal   Pulling    6m56s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m56s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m44s (x20 over 9m47s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m30s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-9hrf6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-749689/192.168.49.2
	Start Time:       Mon, 20 Oct 2025 12:27:26 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k9qn4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k9qn4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9hrf6 to functional-749689
	  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m10s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.52s)
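The repeated pull failures above all trace back to CRI-O's short-name handling: with short-name-mode = "enforcing" in containers-registries.conf and more than one unqualified-search registry configured, an unqualified name such as kicbase/echo-server is rejected as an "ambiguous list" rather than resolved. A minimal sketch of the two usual remedies, assuming the node reads drop-in policy from /etc/containers/registries.conf.d/ as CRI-O normally does (the drop-in file name is hypothetical):

	# Remedy 1: fully qualify the image so short-name resolution never runs.
	kubectl --context functional-749689 set image deployment/hello-node-connect \
	    echo-server=docker.io/kicbase/echo-server:latest
	# Remedy 2: pin the short name to one registry inside the node, e.g. in
	# /etc/containers/registries.conf.d/echo-server.conf (hypothetical path):
	#   [aliases]
	#   "kicbase/echo-server" = "docker.io/kicbase/echo-server"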

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-749689 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-749689 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8kxx6" [96a97577-ad88-497c-a539-f8cbdd055f3e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1020 12:29:56.300084  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:30:24.009686  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:34:56.300036  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-749689 -n functional-749689
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-20 12:37:43.51274963 +0000 UTC m=+1254.551679996
functional_test.go:1460: (dbg) Run:  kubectl --context functional-749689 describe po hello-node-75c85bcc94-8kxx6 -n default
functional_test.go:1460: (dbg) kubectl --context functional-749689 describe po hello-node-75c85bcc94-8kxx6 -n default:
Name:             hello-node-75c85bcc94-8kxx6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-749689/192.168.49.2
Start Time:       Mon, 20 Oct 2025 12:27:42 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g7cwz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-g7cwz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8kxx6 to functional-749689
  Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-749689 logs hello-node-75c85bcc94-8kxx6 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-749689 logs hello-node-75c85bcc94-8kxx6 -n default: exit status 1 (122.695233ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-8kxx6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-749689 logs hello-node-75c85bcc94-8kxx6 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)
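For comparison with the failing command at functional_test.go:1451, a hypothetical variant that sidesteps short-name enforcement by deploying a fully qualified reference (the :1.0 tag is the one minikube's tutorials use; treat it as an assumption here, not what the test ran):

	kubectl --context functional-749689 create deployment hello-node \
	    --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-749689 rollout status deployment/hello-node --timeout=120s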

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 service --namespace=default --https --url hello-node: exit status 115 (665.720124ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30881
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-749689 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.67s)
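SVC_UNREACHABLE here is a cascade rather than a tunnelling bug: the NodePort (30881) was allocated and printed, but the service has no ready backend because the hello-node pod never pulled its image. A quick sketch to confirm that distinction with plain kubectl:

	kubectl --context functional-749689 get svc hello-node        # NodePort is allocated
	kubectl --context functional-749689 get endpoints hello-node  # empty ENDPOINTS -> no ready pod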

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 service hello-node --url --format={{.IP}}: exit status 115 (553.14631ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-749689 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)
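--format takes a Go template over the resolved service URL; minikube's default is effectively http://{{.IP}}:{{.Port}}, which is why {{.IP}} alone prints just the node IP in the stdout above. A sketch of a fuller template (it would still exit 115 here, since the service has no running pod):

	out/minikube-linux-arm64 -p functional-749689 service hello-node --url --format="{{.IP}}:{{.Port}}"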

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 service hello-node --url: exit status 115 (412.384579ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30881
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-749689 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30881
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image load --daemon kicbase/echo-server:functional-749689 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 image load --daemon kicbase/echo-server:functional-749689 --alsologtostderr: (2.049050006s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-749689" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.32s)
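image load --daemon copies a tag from the host's Docker daemon into the cluster's container storage, and the test then asserts the tag appears in image ls. The round trip can be replayed by hand; a sketch using the profile and tag from the log:

	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-749689
	out/minikube-linux-arm64 -p functional-749689 image load --daemon kicbase/echo-server:functional-749689
	out/minikube-linux-arm64 -p functional-749689 image ls | grep echo-server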

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image load --daemon kicbase/echo-server:functional-749689 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-749689" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
2025/10/20 12:37:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (6.930389467s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-749689
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image load --daemon kicbase/echo-server:functional-749689 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-749689" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.87s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image save kicbase/echo-server:functional-749689 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1020 12:38:04.060996  326888 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:38:04.061167  326888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:04.061175  326888 out.go:374] Setting ErrFile to fd 2...
	I1020 12:38:04.061179  326888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:38:04.061419  326888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:38:04.062068  326888 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:04.062193  326888 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:38:04.062637  326888 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
	I1020 12:38:04.081502  326888 ssh_runner.go:195] Run: systemctl --version
	I1020 12:38:04.081571  326888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
	I1020 12:38:04.100006  326888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
	I1020 12:38:04.202880  326888 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1020 12:38:04.202935  326888 cache_images.go:254] Failed to load cached images for "functional-749689": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1020 12:38:04.202957  326888 cache_images.go:266] failed pushing to: functional-749689

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
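This failure is downstream of ImageSaveToFile: the stderr shows the load stage never found the tarball (stat ... no such file or directory) because the earlier image save produced no file. The intended save/load round trip, sketched with /tmp standing in for the workspace path:

	out/minikube-linux-arm64 -p functional-749689 image save kicbase/echo-server:functional-749689 /tmp/echo-server-save.tar
	tar -tf /tmp/echo-server-save.tar   # should list the image tarball contents
	out/minikube-linux-arm64 -p functional-749689 image load /tmp/echo-server-save.tar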

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-749689
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image save --daemon kicbase/echo-server:functional-749689 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-749689
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-749689: exit status 1 (18.956712ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-749689

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-749689

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
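The localhost/ prefix in the assertion appears to reflect how the cluster's container storage canonicalizes unqualified tags, so image save --daemon hands the image back to Docker under localhost/<name>; since the bare kicbase/echo-server:functional-749689 tag never made it into the cluster (see the load failures above), there was nothing to export. Assuming a successful save, the result could be checked with:

	out/minikube-linux-arm64 -p functional-749689 image ls --format table
	docker images --filter reference='localhost/kicbase/echo-server'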

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.31s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-765746 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-765746 --output=json --user=testUser: exit status 80 (2.304855981s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e8257444-225e-41ee-8c55-bf4cda473358","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-765746 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"18ad7ca1-7272-4be6-8470-9b0da4c86f47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-20T12:52:56Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"268f0cc2-008f-44cc-a723-ea9e1725b99d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-765746 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.31s)
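minikube pause enumerates containers with sudo runc list -f json, which reads runc's state directory /run/runc. One plausible cause of the error above is a node whose CRI-O default OCI runtime is not runc (crun, for instance, keeps its state under /run/crun), in which case /run/runc never exists and every pause/unpause attempt exits 80. A diagnostic sketch against this profile:

	out/minikube-linux-arm64 -p json-output-765746 ssh -- sudo ls /run/runc /run/crun
	out/minikube-linux-arm64 -p json-output-765746 ssh -- sudo grep -R default_runtime /etc/crio/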

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-765746 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-765746 --output=json --user=testUser: exit status 80 (1.680415099s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a5c63bd9-c7af-46b2-8638-a583be37b848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-765746 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"62a3774e-a277-48c3-bc1c-2739a2b23e18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-20T12:52:58Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"53383b4a-5314-49b8-a23a-ed18eb118161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-765746 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.68s)
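With --output=json the command emits newline-delimited CloudEvents, so the failure can be consumed programmatically instead of scraping the boxed advice text. A hypothetical one-liner, assuming jq is available on the host:

	out/minikube-linux-arm64 unpause -p json-output-765746 --output=json --user=testUser \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'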

                                                
                                    
x
+
TestPause/serial/Pause (8.08s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-255950 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-255950 --alsologtostderr -v=5: exit status 80 (2.014190145s)

                                                
                                                
-- stdout --
	* Pausing node pause-255950 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 13:10:40.217158  438594 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:10:40.217277  438594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:10:40.217282  438594 out.go:374] Setting ErrFile to fd 2...
	I1020 13:10:40.217287  438594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:10:40.217631  438594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:10:40.217931  438594 out.go:368] Setting JSON to false
	I1020 13:10:40.217954  438594 mustload.go:65] Loading cluster: pause-255950
	I1020 13:10:40.218472  438594 config.go:182] Loaded profile config "pause-255950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:10:40.219000  438594 cli_runner.go:164] Run: docker container inspect pause-255950 --format={{.State.Status}}
	I1020 13:10:40.243995  438594 host.go:66] Checking if "pause-255950" exists ...
	I1020 13:10:40.244337  438594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:10:40.373755  438594 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2025-10-20 13:10:40.357807241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:10:40.375380  438594 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-255950 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 13:10:40.396573  438594 out.go:179] * Pausing node pause-255950 ... 
	I1020 13:10:40.407390  438594 host.go:66] Checking if "pause-255950" exists ...
	I1020 13:10:40.407754  438594 ssh_runner.go:195] Run: systemctl --version
	I1020 13:10:40.407804  438594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-255950
	I1020 13:10:40.442747  438594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33343 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/pause-255950/id_rsa Username:docker}
	I1020 13:10:40.559359  438594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:10:40.572922  438594 pause.go:52] kubelet running: true
	I1020 13:10:40.573003  438594 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:10:40.804641  438594 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:10:40.804730  438594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:10:40.911525  438594 cri.go:89] found id: "29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32"
	I1020 13:10:40.911550  438594 cri.go:89] found id: "7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa"
	I1020 13:10:40.911555  438594 cri.go:89] found id: "4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef"
	I1020 13:10:40.911559  438594 cri.go:89] found id: "1cab805d87ac6945b1702fdba35f863d0eab965d77251ea63b6877b102faefc9"
	I1020 13:10:40.911563  438594 cri.go:89] found id: "fdcefaee03154ff132395152aa895f9040b0c73ea3c4489fcff9c0d96c5ccfdf"
	I1020 13:10:40.911567  438594 cri.go:89] found id: "8cce483217954951ae86feef813e26cc38a286c0b7682a2b390450e5f79b1405"
	I1020 13:10:40.911570  438594 cri.go:89] found id: "b156159ae41104e20ecf243e358f57a6b324c915fc74990fce030e1be3206013"
	I1020 13:10:40.911573  438594 cri.go:89] found id: "39011d72401dcb55eb79e40b61f09e4f72691eb0d5f5693ff57d74a14f3d0718"
	I1020 13:10:40.911577  438594 cri.go:89] found id: "6f5a1501f1b448340bc0ee77a84c6377aba8c6891b0578c346a3bc68650bff81"
	I1020 13:10:40.911583  438594 cri.go:89] found id: "a9e3751b02c02af3363adeea01cd46ab475f5353613ab3c7a2dbd4aaa67ce58e"
	I1020 13:10:40.911587  438594 cri.go:89] found id: "e7447a699d78349eb0e4959ec142ecd1cecd256c92e233f6777cff9cd3437931"
	I1020 13:10:40.911590  438594 cri.go:89] found id: "af0fb8c09a9f53a737646b19da0465322c52fc0d464cd81be947d0208be197e5"
	I1020 13:10:40.911593  438594 cri.go:89] found id: "f1a5586b1fcb4d600daf022adc9eba64e1f25ffc5eb78d36bc7acd7bae7a4bd0"
	I1020 13:10:40.911597  438594 cri.go:89] found id: "e80f970d8b4aad9cbd724b45750c6a4fcef45515362248fb7547d0f931dc4e3f"
	I1020 13:10:40.911600  438594 cri.go:89] found id: ""
	I1020 13:10:40.911650  438594 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:10:40.928964  438594 retry.go:31] will retry after 255.181414ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:10:40Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:10:41.184360  438594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:10:41.209393  438594 pause.go:52] kubelet running: false
	I1020 13:10:41.209511  438594 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:10:41.418274  438594 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:10:41.418446  438594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:10:41.529756  438594 cri.go:89] found id: "29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32"
	I1020 13:10:41.529836  438594 cri.go:89] found id: "7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa"
	I1020 13:10:41.529858  438594 cri.go:89] found id: "4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef"
	I1020 13:10:41.529880  438594 cri.go:89] found id: "1cab805d87ac6945b1702fdba35f863d0eab965d77251ea63b6877b102faefc9"
	I1020 13:10:41.529915  438594 cri.go:89] found id: "fdcefaee03154ff132395152aa895f9040b0c73ea3c4489fcff9c0d96c5ccfdf"
	I1020 13:10:41.529939  438594 cri.go:89] found id: "8cce483217954951ae86feef813e26cc38a286c0b7682a2b390450e5f79b1405"
	I1020 13:10:41.529973  438594 cri.go:89] found id: "b156159ae41104e20ecf243e358f57a6b324c915fc74990fce030e1be3206013"
	I1020 13:10:41.529991  438594 cri.go:89] found id: "39011d72401dcb55eb79e40b61f09e4f72691eb0d5f5693ff57d74a14f3d0718"
	I1020 13:10:41.530034  438594 cri.go:89] found id: "6f5a1501f1b448340bc0ee77a84c6377aba8c6891b0578c346a3bc68650bff81"
	I1020 13:10:41.530061  438594 cri.go:89] found id: "a9e3751b02c02af3363adeea01cd46ab475f5353613ab3c7a2dbd4aaa67ce58e"
	I1020 13:10:41.530093  438594 cri.go:89] found id: "e7447a699d78349eb0e4959ec142ecd1cecd256c92e233f6777cff9cd3437931"
	I1020 13:10:41.530125  438594 cri.go:89] found id: "af0fb8c09a9f53a737646b19da0465322c52fc0d464cd81be947d0208be197e5"
	I1020 13:10:41.530147  438594 cri.go:89] found id: "f1a5586b1fcb4d600daf022adc9eba64e1f25ffc5eb78d36bc7acd7bae7a4bd0"
	I1020 13:10:41.530167  438594 cri.go:89] found id: "e80f970d8b4aad9cbd724b45750c6a4fcef45515362248fb7547d0f931dc4e3f"
	I1020 13:10:41.530187  438594 cri.go:89] found id: ""
	I1020 13:10:41.530279  438594 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:10:41.542823  438594 retry.go:31] will retry after 227.56761ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:10:41Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:10:41.771301  438594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:10:41.787699  438594 pause.go:52] kubelet running: false
	I1020 13:10:41.787778  438594 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:10:41.966242  438594 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:10:41.966339  438594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:10:42.075935  438594 cri.go:89] found id: "29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32"
	I1020 13:10:42.075982  438594 cri.go:89] found id: "7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa"
	I1020 13:10:42.075989  438594 cri.go:89] found id: "4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef"
	I1020 13:10:42.075996  438594 cri.go:89] found id: "1cab805d87ac6945b1702fdba35f863d0eab965d77251ea63b6877b102faefc9"
	I1020 13:10:42.076001  438594 cri.go:89] found id: "fdcefaee03154ff132395152aa895f9040b0c73ea3c4489fcff9c0d96c5ccfdf"
	I1020 13:10:42.076005  438594 cri.go:89] found id: "8cce483217954951ae86feef813e26cc38a286c0b7682a2b390450e5f79b1405"
	I1020 13:10:42.076008  438594 cri.go:89] found id: "b156159ae41104e20ecf243e358f57a6b324c915fc74990fce030e1be3206013"
	I1020 13:10:42.076011  438594 cri.go:89] found id: "39011d72401dcb55eb79e40b61f09e4f72691eb0d5f5693ff57d74a14f3d0718"
	I1020 13:10:42.076015  438594 cri.go:89] found id: "6f5a1501f1b448340bc0ee77a84c6377aba8c6891b0578c346a3bc68650bff81"
	I1020 13:10:42.076039  438594 cri.go:89] found id: "a9e3751b02c02af3363adeea01cd46ab475f5353613ab3c7a2dbd4aaa67ce58e"
	I1020 13:10:42.076047  438594 cri.go:89] found id: "e7447a699d78349eb0e4959ec142ecd1cecd256c92e233f6777cff9cd3437931"
	I1020 13:10:42.076051  438594 cri.go:89] found id: "af0fb8c09a9f53a737646b19da0465322c52fc0d464cd81be947d0208be197e5"
	I1020 13:10:42.076055  438594 cri.go:89] found id: "f1a5586b1fcb4d600daf022adc9eba64e1f25ffc5eb78d36bc7acd7bae7a4bd0"
	I1020 13:10:42.076061  438594 cri.go:89] found id: "e80f970d8b4aad9cbd724b45750c6a4fcef45515362248fb7547d0f931dc4e3f"
	I1020 13:10:42.076070  438594 cri.go:89] found id: ""
	I1020 13:10:42.076145  438594 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:10:42.099983  438594 out.go:203] 
	W1020 13:10:42.104543  438594 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:10:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:10:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 13:10:42.104575  438594 out.go:285] * 
	* 
	W1020 13:10:42.115081  438594 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 13:10:42.119306  438594 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-255950 --alsologtostderr -v=5" : exit status 80
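The trace above lays out the pause sequence: disable kubelet, list CRI containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces via crictl (fourteen IDs found each time), then runc list -f json, the only step that fails. The crictl half can be replayed with the exact command from the log:

	out/minikube-linux-arm64 -p pause-255950 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-arm64 -p pause-255950 ssh -- sudo runc list -f json   # fails: /run/runc missing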
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-255950
helpers_test.go:243: (dbg) docker inspect pause-255950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b",
	        "Created": "2025-10-20T13:09:24.7006455Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 429336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:09:24.787824701Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b/hosts",
	        "LogPath": "/var/lib/docker/containers/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b-json.log",
	        "Name": "/pause-255950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-255950:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-255950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b",
	                "LowerDir": "/var/lib/docker/overlay2/657a871bdc57d48b700ddf42dd5906c59bbfa649a290a89d0f269adb4fe6cb19-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/657a871bdc57d48b700ddf42dd5906c59bbfa649a290a89d0f269adb4fe6cb19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/657a871bdc57d48b700ddf42dd5906c59bbfa649a290a89d0f269adb4fe6cb19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/657a871bdc57d48b700ddf42dd5906c59bbfa649a290a89d0f269adb4fe6cb19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-255950",
	                "Source": "/var/lib/docker/volumes/pause-255950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-255950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-255950",
	                "name.minikube.sigs.k8s.io": "pause-255950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "989cc4cc3b1a9a1c0aad9e5578816b9c18cb5c91ae816b722eda5d5d0e8413b8",
	            "SandboxKey": "/var/run/docker/netns/989cc4cc3b1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33343"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33344"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33347"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33345"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33346"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-255950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:c1:e9:74:96:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e133b701f275efe85a9fe246ed8840cdea79bffe1760fac2f859c0978630d83e",
	                    "EndpointID": "df87b02a0728ce933fe32ff930260f1a4aad8861bb219d643586da27e99bac97",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-255950",
	                        "41b7259a2fc1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
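The inspect output above shows every container port published on 127.0.0.1. As a spot check, the host port for a given container port can be read back with a Go template; given the mapping above, this should print 33343:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-255950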
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-255950 -n pause-255950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-255950 -n pause-255950: exit status 2 (505.770118ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-255950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-255950 logs -n 25: (1.996599224s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-608880 --schedule 5m                                                                                │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --cancel-scheduled                                                                           │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │ 20 Oct 25 13:07 UTC │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:08 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:08 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:08 UTC │ 20 Oct 25 13:08 UTC │
	│ delete  │ -p scheduled-stop-608880                                                                                              │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:08 UTC │ 20 Oct 25 13:09 UTC │
	│ start   │ -p insufficient-storage-255510 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-255510 │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │                     │
	│ delete  │ -p insufficient-storage-255510                                                                                        │ insufficient-storage-255510 │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │ 20 Oct 25 13:09 UTC │
	│ start   │ -p pause-255950 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-255950                │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │ 20 Oct 25 13:10 UTC │
	│ start   │ -p NoKubernetes-820821 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │                     │
	│ start   │ -p NoKubernetes-820821 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │ 20 Oct 25 13:09 UTC │
	│ start   │ -p NoKubernetes-820821 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ delete  │ -p NoKubernetes-820821                                                                                                │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ start   │ -p NoKubernetes-820821 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ ssh     │ -p NoKubernetes-820821 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │                     │
	│ start   │ -p pause-255950 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-255950                │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ stop    │ -p NoKubernetes-820821                                                                                                │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ start   │ -p NoKubernetes-820821 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ ssh     │ -p NoKubernetes-820821 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │                     │
	│ delete  │ -p NoKubernetes-820821                                                                                                │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ start   │ -p missing-upgrade-507750 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-507750      │ jenkins │ v1.32.0 │ 20 Oct 25 13:10 UTC │                     │
	│ pause   │ -p pause-255950 --alsologtostderr -v=5                                                                                │ pause-255950                │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
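The failing operation in the last audit row can be replayed by hand against the same profile (COMMAND plus ARGS from the table, prefixed with the binary under test):

	out/minikube-linux-arm64 pause -p pause-255950 --alsologtostderr -v=5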
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:10:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:10:30.615671  438347 out.go:296] Setting OutFile to fd 1 ...
	I1020 13:10:30.615841  438347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1020 13:10:30.615845  438347 out.go:309] Setting ErrFile to fd 2...
	I1020 13:10:30.615850  438347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1020 13:10:30.616098  438347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:10:30.616526  438347 out.go:303] Setting JSON to false
	I1020 13:10:30.617441  438347 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10381,"bootTime":1760955450,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:10:30.617504  438347 start.go:138] virtualization:  
	I1020 13:10:30.625104  438347 out.go:177] * [missing-upgrade-507750] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1020 13:10:30.628449  438347 out.go:177]   - MINIKUBE_LOCATION=21773
	I1020 13:10:30.628407  438347 notify.go:220] Checking for updates...
	I1020 13:10:30.631569  438347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:10:30.634547  438347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:10:30.637553  438347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:10:30.640394  438347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:10:30.643680  438347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:10:30.647049  438347 config.go:182] Loaded profile config "pause-255950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:10:30.647124  438347 driver.go:378] Setting default libvirt URI to qemu:///system
	I1020 13:10:30.680520  438347 docker.go:122] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:10:30.680618  438347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:10:30.766142  438347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/last_update_check: {Name:mk3ce886fb63584532d5ebe1a44e2db12b224504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:10:30.772430  438347 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1020 13:10:30.775635  438347 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1020 13:10:30.820858  438347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-20 13:10:30.805114642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:10:30.820974  438347 docker.go:295] overlay module found
	I1020 13:10:30.824226  438347 out.go:177] * Using the docker driver based on user configuration
	I1020 13:10:30.827107  438347 start.go:298] selected driver: docker
	I1020 13:10:30.827118  438347 start.go:902] validating driver "docker" against <nil>
	I1020 13:10:30.827129  438347 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:10:30.827761  438347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:10:30.924078  438347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-20 13:10:30.915210253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:10:30.924225  438347 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1020 13:10:30.924466  438347 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1020 13:10:30.929819  438347 out.go:177] * Using Docker driver with root privileges
	I1020 13:10:30.932739  438347 cni.go:84] Creating CNI manager for ""
	I1020 13:10:30.932752  438347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:10:30.932763  438347 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:10:30.932774  438347 start_flags.go:323] config:
	{Name:missing-upgrade-507750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-507750 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1020 13:10:30.935792  438347 out.go:177] * Starting control plane node missing-upgrade-507750 in cluster missing-upgrade-507750
	I1020 13:10:30.938590  438347 cache.go:121] Beginning downloading kic base image for docker with crio
	I1020 13:10:30.941403  438347 out.go:177] * Pulling base image ...
	I1020 13:10:30.944249  438347 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1020 13:10:30.944431  438347 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1020 13:10:30.974025  438347 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1020 13:10:30.974210  438347 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1020 13:10:30.974242  438347 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1020 13:10:31.000555  438347 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1020 13:10:31.000569  438347 cache.go:56] Caching tarball of preloaded images
	I1020 13:10:31.000709  438347 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1020 13:10:31.003883  438347 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1020 13:10:29.816106  435908 addons.go:514] duration metric: took 8.151193ms for enable addons: enabled=[]
	I1020 13:10:29.816179  435908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:10:30.056694  435908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:10:30.090622  435908 node_ready.go:35] waiting up to 6m0s for node "pause-255950" to be "Ready" ...
	I1020 13:10:31.006777  438347 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1020 13:10:31.091915  438347 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1020 13:10:35.473681  435908 node_ready.go:49] node "pause-255950" is "Ready"
	I1020 13:10:35.473752  435908 node_ready.go:38] duration metric: took 5.38309103s for node "pause-255950" to be "Ready" ...
	I1020 13:10:35.473780  435908 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:10:35.473869  435908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:10:35.536809  435908 api_server.go:72] duration metric: took 5.729193664s to wait for apiserver process to appear ...
	I1020 13:10:35.536879  435908 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:10:35.536924  435908 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:10:35.568721  435908 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 13:10:35.568804  435908 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 13:10:36.037370  435908 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:10:36.056559  435908 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:10:36.056665  435908 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
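	The only check failing in these 500 responses is the rbac/bootstrap-roles post-start hook, which clears once the restarted apiserver finishes reconciling its bootstrap RBAC policy; minikube keeps polling until /healthz returns 200, as it does at 13:10:37 below. A minimal sketch of that kind of poll loop (illustrative Go, not minikube's actual api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns HTTP 200 or the timeout elapses,
	// logging each non-200 body the way the report above does.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// InsecureSkipVerify keeps the sketch self-contained; real code pins the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				log.Printf("healthz returned %d: %s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %v", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("ok")
	}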
	I1020 13:10:36.537362  435908 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:10:36.567866  435908 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:10:36.567907  435908 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:10:37.040419  435908 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:10:37.055430  435908 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 13:10:37.057521  435908 api_server.go:141] control plane version: v1.34.1
	I1020 13:10:37.057549  435908 api_server.go:131] duration metric: took 1.52064987s to wait for apiserver health ...
	I1020 13:10:37.057559  435908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:10:37.066503  435908 system_pods.go:59] 7 kube-system pods found
	I1020 13:10:37.066535  435908 system_pods.go:61] "coredns-66bc5c9577-d684w" [2b94f928-d688-4568-be9a-29d7454bb47f] Running
	I1020 13:10:37.066541  435908 system_pods.go:61] "etcd-pause-255950" [9f7c7106-0f1a-4118-a541-af5c48074753] Running
	I1020 13:10:37.066546  435908 system_pods.go:61] "kindnet-n2h9x" [5e0b3867-09d0-4927-a4de-ed6cd6d71d55] Running
	I1020 13:10:37.066551  435908 system_pods.go:61] "kube-apiserver-pause-255950" [8138aa55-bff9-4678-82bf-72db9097daaa] Running
	I1020 13:10:37.066555  435908 system_pods.go:61] "kube-controller-manager-pause-255950" [2316bbf5-0f02-4de3-b514-8f197b84311b] Running
	I1020 13:10:37.066559  435908 system_pods.go:61] "kube-proxy-k82rb" [4ac60a08-1196-4c72-9182-1503a7f8d38e] Running
	I1020 13:10:37.066564  435908 system_pods.go:61] "kube-scheduler-pause-255950" [5b85b699-5a1c-4259-91b2-f3722e6d86fe] Running
	I1020 13:10:37.066570  435908 system_pods.go:74] duration metric: took 9.005561ms to wait for pod list to return data ...
	I1020 13:10:37.066585  435908 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:10:37.070238  435908 default_sa.go:45] found service account: "default"
	I1020 13:10:37.070261  435908 default_sa.go:55] duration metric: took 3.66888ms for default service account to be created ...
	I1020 13:10:37.070271  435908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:10:37.074371  435908 system_pods.go:86] 7 kube-system pods found
	I1020 13:10:37.074460  435908 system_pods.go:89] "coredns-66bc5c9577-d684w" [2b94f928-d688-4568-be9a-29d7454bb47f] Running
	I1020 13:10:37.074484  435908 system_pods.go:89] "etcd-pause-255950" [9f7c7106-0f1a-4118-a541-af5c48074753] Running
	I1020 13:10:37.074524  435908 system_pods.go:89] "kindnet-n2h9x" [5e0b3867-09d0-4927-a4de-ed6cd6d71d55] Running
	I1020 13:10:37.074547  435908 system_pods.go:89] "kube-apiserver-pause-255950" [8138aa55-bff9-4678-82bf-72db9097daaa] Running
	I1020 13:10:37.074572  435908 system_pods.go:89] "kube-controller-manager-pause-255950" [2316bbf5-0f02-4de3-b514-8f197b84311b] Running
	I1020 13:10:37.074592  435908 system_pods.go:89] "kube-proxy-k82rb" [4ac60a08-1196-4c72-9182-1503a7f8d38e] Running
	I1020 13:10:37.074629  435908 system_pods.go:89] "kube-scheduler-pause-255950" [5b85b699-5a1c-4259-91b2-f3722e6d86fe] Running
	I1020 13:10:37.074655  435908 system_pods.go:126] duration metric: took 4.377861ms to wait for k8s-apps to be running ...
	I1020 13:10:37.074678  435908 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:10:37.074768  435908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:10:37.131797  435908 system_svc.go:56] duration metric: took 57.10957ms WaitForService to wait for kubelet
	I1020 13:10:37.131869  435908 kubeadm.go:586] duration metric: took 7.324258577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:10:37.131920  435908 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:10:37.136022  435908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:10:37.136111  435908 node_conditions.go:123] node cpu capacity is 2
	I1020 13:10:37.136139  435908 node_conditions.go:105] duration metric: took 4.186999ms to run NodePressure ...
	I1020 13:10:37.136182  435908 start.go:241] waiting for startup goroutines ...
	I1020 13:10:37.136208  435908 start.go:246] waiting for cluster config update ...
	I1020 13:10:37.136243  435908 start.go:255] writing updated cluster config ...
	I1020 13:10:37.136631  435908 ssh_runner.go:195] Run: rm -f paused
	I1020 13:10:37.141026  435908 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:10:37.141706  435908 kapi.go:59] client config for pause-255950: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-296391/.minikube/profiles/pause-255950/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-296391/.minikube/profiles/pause-255950/client.key", CAFile:"/home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 13:10:37.146068  435908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d684w" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.153267  435908 pod_ready.go:94] pod "coredns-66bc5c9577-d684w" is "Ready"
	I1020 13:10:37.153350  435908 pod_ready.go:86] duration metric: took 7.211795ms for pod "coredns-66bc5c9577-d684w" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.158298  435908 pod_ready.go:83] waiting for pod "etcd-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.166281  435908 pod_ready.go:94] pod "etcd-pause-255950" is "Ready"
	I1020 13:10:37.166364  435908 pod_ready.go:86] duration metric: took 7.994564ms for pod "etcd-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.170868  435908 pod_ready.go:83] waiting for pod "kube-apiserver-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.181868  435908 pod_ready.go:94] pod "kube-apiserver-pause-255950" is "Ready"
	I1020 13:10:37.181949  435908 pod_ready.go:86] duration metric: took 11.012287ms for pod "kube-apiserver-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.185318  435908 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.546647  435908 pod_ready.go:94] pod "kube-controller-manager-pause-255950" is "Ready"
	I1020 13:10:37.546672  435908 pod_ready.go:86] duration metric: took 361.285613ms for pod "kube-controller-manager-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.747715  435908 pod_ready.go:83] waiting for pod "kube-proxy-k82rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:38.163336  435908 pod_ready.go:94] pod "kube-proxy-k82rb" is "Ready"
	I1020 13:10:38.163364  435908 pod_ready.go:86] duration metric: took 415.62571ms for pod "kube-proxy-k82rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:38.373099  435908 pod_ready.go:83] waiting for pod "kube-scheduler-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:39.945700  435908 pod_ready.go:94] pod "kube-scheduler-pause-255950" is "Ready"
	I1020 13:10:39.945736  435908 pod_ready.go:86] duration metric: took 1.572614409s for pod "kube-scheduler-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:39.945748  435908 pod_ready.go:40] duration metric: took 2.804641942s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:10:40.049854  435908 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:10:40.054653  435908 out.go:179] * Done! kubectl is now configured to use "pause-255950" cluster and "default" namespace by default
	I1020 13:10:36.819252  438347 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1020 13:10:36.819339  438347 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1020 13:10:37.437000  438347 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1020 13:10:37.437011  438347 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1020 13:10:38.707458  438347 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1020 13:10:38.707570  438347 profile.go:148] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/missing-upgrade-507750/config.json ...
	I1020 13:10:38.707597  438347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/missing-upgrade-507750/config.json: {Name:mk999f095b8d0d36401a3e23db33e4a12b1d4a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
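	The preload tarball fetched above is verified against the md5 carried in the download URL's checksum parameter (3fdaeefa2c0cc3e046170ba83ccf0cac for this tarball). A sketch of that verification step, illustrative Go rather than minikube's actual download code:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	// verifyMD5 streams the file at path through md5 and compares against want.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		if err := verifyMD5(os.Args[1], "3fdaeefa2c0cc3e046170ba83ccf0cac"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("checksum ok")
	}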
	
	
	==> CRI-O <==
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.464585672Z" level=info msg="Started container" PID=2249 containerID=1cab805d87ac6945b1702fdba35f863d0eab965d77251ea63b6877b102faefc9 description=kube-system/kindnet-n2h9x/kindnet-cni id=04f80090-a983-4e94-871c-39d6da16e654 name=/runtime.v1.RuntimeService/StartContainer sandboxID=748f15ad8183f2caec81e4ce0c8d217fb063e11c796b212b5f8d1f2c45629f8c
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.467300081Z" level=info msg="Created container 7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa: kube-system/coredns-66bc5c9577-d684w/coredns" id=e0099207-67ac-40fc-83dc-7170af762ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.476203126Z" level=info msg="Starting container: 7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa" id=532dd7ac-7b20-402a-ba5c-4fe9739139a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.510339737Z" level=info msg="Started container" PID=2266 containerID=7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa description=kube-system/coredns-66bc5c9577-d684w/coredns id=532dd7ac-7b20-402a-ba5c-4fe9739139a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa17083d7af9daf6229612fbceea07dfb09c584c0422498ad85829ee7a35e9e7
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.598723639Z" level=info msg="Created container 29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32: kube-system/etcd-pause-255950/etcd" id=a899bdd4-f4d2-4e53-9ad7-a5fc5ce567e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.5994125Z" level=info msg="Starting container: 29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32" id=4ef70bef-e660-4c18-a2c4-6a9032763d5a name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.604665389Z" level=info msg="Started container" PID=2306 containerID=29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32 description=kube-system/etcd-pause-255950/etcd id=4ef70bef-e660-4c18-a2c4-6a9032763d5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=91edbe1c6ea7097ae494ba7e28a59e5e94159a3de657bf49a69bc39d43290ff4
	Oct 20 13:10:29 pause-255950 crio[2098]: time="2025-10-20T13:10:29.340650867Z" level=info msg="Created container 4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef: kube-system/kube-proxy-k82rb/kube-proxy" id=8c46f413-c019-45e6-90dd-dee999d2a56f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:10:29 pause-255950 crio[2098]: time="2025-10-20T13:10:29.344861062Z" level=info msg="Starting container: 4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef" id=9e2f90ef-2a38-4860-a936-ef39fd677f7b name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:10:29 pause-255950 crio[2098]: time="2025-10-20T13:10:29.350456732Z" level=info msg="Started container" PID=2282 containerID=4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef description=kube-system/kube-proxy-k82rb/kube-proxy id=9e2f90ef-2a38-4860-a936-ef39fd677f7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0067948d99ab84bd8dc4e2bfe92243e581c5a68cabc93eb91940bd16f5c2b4b3
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.898739632Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.907219518Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.907287104Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.907308397Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.912036661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.913006624Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.913033513Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.920319958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.920356406Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.920451996Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.94585492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.946181151Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.946386815Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.954792132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.955053591Z" level=info msg="Updated default CNI network name to kindnet"
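	The CNI monitoring events above show CRI-O watching /etc/cni/net.d while kindnet rewrites its conflist (temp file created, written, then renamed into place). A minimal Go sketch of such a directory watch, using github.com/fsnotify/fsnotify as an assumed stand-in for CRI-O's actual implementation:

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// Emits CREATE/WRITE/RENAME events like the CRI-O lines above.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			case err := <-w.Errors:
				log.Printf("watch error: %v", err)
			}
		}
	}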
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	29e036fd5499f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      1                   91edbe1c6ea70       etcd-pause-255950                      kube-system
	7f47d3906e4d6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   15 seconds ago      Running             coredns                   1                   aa17083d7af9d       coredns-66bc5c9577-d684w               kube-system
	4594ac9fd9b14       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   15 seconds ago      Running             kube-proxy                1                   0067948d99ab8       kube-proxy-k82rb                       kube-system
	1cab805d87ac6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   15 seconds ago      Running             kindnet-cni               1                   748f15ad8183f       kindnet-n2h9x                          kube-system
	fdcefaee03154       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            1                   178c199d7d7b1       kube-scheduler-pause-255950            kube-system
	8cce483217954       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            1                   bc376a4cd9ae8       kube-apiserver-pause-255950            kube-system
	b156159ae4110       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   1                   0113520ddb798       kube-controller-manager-pause-255950   kube-system
	39011d72401dc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   28 seconds ago      Exited              coredns                   0                   aa17083d7af9d       coredns-66bc5c9577-d684w               kube-system
	6f5a1501f1b44       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   39 seconds ago      Exited              kindnet-cni               0                   748f15ad8183f       kindnet-n2h9x                          kube-system
	a9e3751b02c02       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   39 seconds ago      Exited              kube-proxy                0                   0067948d99ab8       kube-proxy-k82rb                       kube-system
	e7447a699d783       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   55 seconds ago      Exited              kube-controller-manager   0                   0113520ddb798       kube-controller-manager-pause-255950   kube-system
	af0fb8c09a9f5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   55 seconds ago      Exited              etcd                      0                   91edbe1c6ea70       etcd-pause-255950                      kube-system
	f1a5586b1fcb4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   56 seconds ago      Exited              kube-apiserver            0                   bc376a4cd9ae8       kube-apiserver-pause-255950            kube-system
	e80f970d8b4aa       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   56 seconds ago      Exited              kube-scheduler            0                   178c199d7d7b1       kube-scheduler-pause-255950            kube-system
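	Each control-plane component shows a paired Exited (attempt 0) and Running (attempt 1) container: the pause test's second start recreated every container inside the same pod sandboxes. The same view can be pulled from the node at any time with:

	out/minikube-linux-arm64 -p pause-255950 ssh -- sudo crictl ps -a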
	
	
	==> coredns [39011d72401dcb55eb79e40b61f09e4f72691eb0d5f5693ff57d74a14f3d0718] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42666 - 59895 "HINFO IN 3938642766750719388.8342086583717484870. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024473909s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42673 - 31354 "HINFO IN 3457138896136666846.3717781881670930441. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043221584s
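	The restarted coredns container (7f47d3906e4d6) logs "waiting for Kubernetes API" until the apiserver returns, then binds :53 and answers its HINFO self-check. Its pod-level readiness can be confirmed with the standard kube-dns label selector:

	kubectl --context pause-255950 -n kube-system get pods -l k8s-app=kube-dns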
	
	
	==> describe nodes <==
	Name:               pause-255950
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-255950
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=pause-255950
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_09_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:09:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-255950
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:10:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:10:29 +0000   Mon, 20 Oct 2025 13:09:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:10:29 +0000   Mon, 20 Oct 2025 13:09:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:10:29 +0000   Mon, 20 Oct 2025 13:09:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:10:29 +0000   Mon, 20 Oct 2025 13:10:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-255950
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                bb68224a-25df-447e-8a61-227046f8809e
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-d684w                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     41s
	  kube-system                 etcd-pause-255950                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         45s
	  kube-system                 kindnet-n2h9x                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-apiserver-pause-255950             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-pause-255950    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-k82rb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-pause-255950             100m (5%)     0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 39s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node pause-255950 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node pause-255950 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node pause-255950 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 46s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  45s                kubelet          Node pause-255950 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s                kubelet          Node pause-255950 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s                kubelet          Node pause-255950 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           42s                node-controller  Node pause-255950 event: Registered Node pause-255950 in Controller
	  Normal   NodeReady                29s                kubelet          Node pause-255950 status is now: NodeReady
	  Normal   RegisteredNode           4s                 node-controller  Node pause-255950 event: Registered Node pause-255950 in Controller
	
	
	==> dmesg <==
	[Oct20 12:44] overlayfs: idmapped layers are currently not supported
	[Oct20 12:45] overlayfs: idmapped layers are currently not supported
	[ +37.059511] overlayfs: idmapped layers are currently not supported
	[Oct20 12:46] overlayfs: idmapped layers are currently not supported
	[Oct20 12:47] overlayfs: idmapped layers are currently not supported
	[  +3.282483] overlayfs: idmapped layers are currently not supported
	[Oct20 12:49] overlayfs: idmapped layers are currently not supported
	[Oct20 12:50] overlayfs: idmapped layers are currently not supported
	[Oct20 12:51] overlayfs: idmapped layers are currently not supported
	[Oct20 12:56] overlayfs: idmapped layers are currently not supported
	[Oct20 12:57] overlayfs: idmapped layers are currently not supported
	[Oct20 12:58] overlayfs: idmapped layers are currently not supported
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32] <==
	{"level":"warn","ts":"2025-10-20T13:10:31.897800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:31.986354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.019273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.041485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.105984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.114009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.175891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.218733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.246470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.350709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.392846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.402642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.479908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.501482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.542649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.600266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.644998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.695062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.801511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.802620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.888792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.934584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:33.012485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:33.058897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:33.211366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52836","server-name":"","error":"EOF"}
	
	
	==> etcd [af0fb8c09a9f53a737646b19da0465322c52fc0d464cd81be947d0208be197e5] <==
	{"level":"warn","ts":"2025-10-20T13:09:51.849207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:51.893000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:51.950129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:52.005481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:52.044972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:52.081621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:52.237715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42830","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T13:10:20.264149Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-20T13:10:20.264208Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-255950","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-20T13:10:20.264350Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-20T13:10:20.421844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-20T13:10:20.422038Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T13:10:20.422103Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-20T13:10:20.422233Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-20T13:10:20.422282Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-20T13:10:20.422520Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-20T13:10:20.422580Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-20T13:10:20.422615Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-20T13:10:20.422456Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-20T13:10:20.422773Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-20T13:10:20.422811Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T13:10:20.426025Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-20T13:10:20.426175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T13:10:20.426245Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-20T13:10:20.426294Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-255950","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 13:10:43 up  2:53,  0 user,  load average: 6.16, 2.79, 1.99
	Linux pause-255950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cab805d87ac6945b1702fdba35f863d0eab965d77251ea63b6877b102faefc9] <==
	I1020 13:10:28.617542       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:10:28.626184       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:10:28.626408       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:10:28.626462       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:10:28.628747       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:10:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:10:28.899725       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:10:28.899749       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:10:28.899759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:10:28.900564       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:10:35.636839       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 13:10:35.648609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:10:35.648948       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:10:35.649158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1020 13:10:36.799990       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:10:36.800038       1 metrics.go:72] Registering metrics
	I1020 13:10:36.800117       1 controller.go:711] "Syncing nftables rules"
	I1020 13:10:38.897940       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:10:38.898223       1 main.go:301] handling current node
	
	
	==> kindnet [6f5a1501f1b448340bc0ee77a84c6377aba8c6891b0578c346a3bc68650bff81] <==
	I1020 13:10:04.116421       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:10:04.116913       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:10:04.117059       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:10:04.117097       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:10:04.117136       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:10:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:10:04.315454       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:10:04.400699       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:10:04.400798       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:10:04.400956       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 13:10:04.601179       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:10:04.601204       1 metrics.go:72] Registering metrics
	I1020 13:10:04.601267       1 controller.go:711] "Syncing nftables rules"
	I1020 13:10:14.316625       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:10:14.316691       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8cce483217954951ae86feef813e26cc38a286c0b7682a2b390450e5f79b1405] <==
	I1020 13:10:35.690269       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1020 13:10:35.690530       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 13:10:35.690634       1 aggregator.go:171] initial CRD sync complete...
	I1020 13:10:35.690784       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 13:10:35.690826       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:10:35.690875       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:10:35.690702       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:10:35.708664       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:10:35.712438       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 13:10:35.712588       1 policy_source.go:240] refreshing policies
	I1020 13:10:35.732434       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 13:10:35.732582       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 13:10:35.745091       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 13:10:35.745181       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 13:10:35.745321       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 13:10:35.745559       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 13:10:35.745655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 13:10:35.752296       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 13:10:35.772527       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:10:35.851053       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:10:38.513369       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:10:39.530766       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:10:39.692041       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:10:39.743446       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:10:39.845994       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [f1a5586b1fcb4d600daf022adc9eba64e1f25ffc5eb78d36bc7acd7bae7a4bd0] <==
	W1020 13:10:20.316979       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317095       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317227       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317393       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317585       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317671       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317857       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.318310       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.318518       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.318766       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319025       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319207       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319440       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319574       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319733       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.316840       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319448       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.320133       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.320231       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.320342       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.316146       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.321668       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.321324       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.321368       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.321452       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b156159ae41104e20ecf243e358f57a6b324c915fc74990fce030e1be3206013] <==
	I1020 13:10:39.532545       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 13:10:39.534509       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 13:10:39.534642       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 13:10:39.534714       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 13:10:39.537041       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 13:10:39.537188       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:10:39.546002       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:10:39.546146       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:10:39.549517       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:10:39.549593       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:10:39.549603       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:10:39.553302       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:10:39.559162       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 13:10:39.561298       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 13:10:39.562201       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:10:39.570842       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:10:39.576811       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:10:39.586037       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:10:39.586216       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 13:10:39.586300       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:10:39.586491       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-255950"
	I1020 13:10:39.588014       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 13:10:39.593214       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:10:39.609558       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:10:39.609669       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-controller-manager [e7447a699d78349eb0e4959ec142ecd1cecd256c92e233f6777cff9cd3437931] <==
	I1020 13:10:01.695716       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 13:10:01.695737       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 13:10:01.695753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 13:10:01.701800       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 13:10:01.705328       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:10:01.706919       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-255950" podCIDRs=["10.244.0.0/24"]
	I1020 13:10:01.708768       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 13:10:01.715185       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 13:10:01.725695       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:10:01.730053       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:10:01.731124       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:10:01.733430       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 13:10:01.737596       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:10:01.738815       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 13:10:01.746247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:10:01.746338       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:10:01.746371       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:10:01.747924       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 13:10:01.751509       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:10:01.751676       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:10:01.751783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-255950"
	I1020 13:10:01.751854       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1020 13:10:01.765913       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 13:10:01.766458       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:10:16.754086       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef] <==
	I1020 13:10:31.241329       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:10:33.975460       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:10:35.677042       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:10:35.677147       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:10:35.677334       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:10:35.836654       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:10:35.836808       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:10:35.865172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:10:35.865591       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:10:35.865806       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:10:35.867475       1 config.go:200] "Starting service config controller"
	I1020 13:10:35.876670       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:10:35.876813       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:10:35.876957       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:10:35.877050       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:10:35.877080       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:10:35.878007       1 config.go:309] "Starting node config controller"
	I1020 13:10:35.878079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:10:35.878117       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:10:35.977524       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:10:35.977642       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:10:35.977719       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a9e3751b02c02af3363adeea01cd46ab475f5353613ab3c7a2dbd4aaa67ce58e] <==
	I1020 13:10:03.860084       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:10:03.945806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:10:04.053024       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:10:04.053067       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:10:04.053132       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:10:04.140316       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:10:04.140602       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:10:04.145320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:10:04.145689       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:10:04.145891       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:10:04.150652       1 config.go:200] "Starting service config controller"
	I1020 13:10:04.150732       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:10:04.150791       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:10:04.150820       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:10:04.150856       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:10:04.150883       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:10:04.151782       1 config.go:309] "Starting node config controller"
	I1020 13:10:04.151846       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:10:04.151876       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:10:04.251190       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:10:04.251291       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:10:04.251321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e80f970d8b4aad9cbd724b45750c6a4fcef45515362248fb7547d0f931dc4e3f] <==
	I1020 13:09:54.293798       1 serving.go:386] Generated self-signed cert in-memory
	W1020 13:09:55.989268       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 13:09:55.989372       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 13:09:55.989407       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 13:09:55.989448       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 13:09:56.042991       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:09:56.043138       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:09:56.049783       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:09:56.052955       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:09:56.056916       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:09:56.057291       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1020 13:09:56.072890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1020 13:09:57.556086       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:10:20.265924       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1020 13:10:20.265950       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1020 13:10:20.265972       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1020 13:10:20.266017       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:10:20.266300       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1020 13:10:20.266347       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fdcefaee03154ff132395152aa895f9040b0c73ea3c4489fcff9c0d96c5ccfdf] <==
	I1020 13:10:31.804345       1 serving.go:386] Generated self-signed cert in-memory
	I1020 13:10:37.396861       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:10:37.396978       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:10:37.409350       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 13:10:37.409470       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 13:10:37.409568       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:10:37.409619       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:10:37.409671       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:10:37.409704       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:10:37.411256       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:10:37.411347       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:10:37.510382       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:10:37.510522       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 13:10:37.510667       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:10:28 pause-255950 kubelet[1310]: E1020 13:10:28.487444    1310 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-255950?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 20 13:10:28 pause-255950 kubelet[1310]: E1020 13:10:28.487666    1310 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-255950?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 20 13:10:28 pause-255950 kubelet[1310]: I1020 13:10:28.487695    1310 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Oct 20 13:10:28 pause-255950 kubelet[1310]: E1020 13:10:28.487873    1310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-255950?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="200ms"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.395355    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="30fbb281a3b7e493dc880969a561a086" pod="kube-system/kube-scheduler-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.396196    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-255950\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.396400    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-255950\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.446663    1310 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-255950\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.481427    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="7b593a3ef6240d38d6838ae48ab7be19" pod="kube-system/etcd-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.497363    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="be674be594d729bffaa7ea79994926a9" pod="kube-system/kube-apiserver-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.508663    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="d5a722a848b15da94e63b6ad8776c9c0" pod="kube-system/kube-controller-manager-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.513080    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-n2h9x\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="5e0b3867-09d0-4927-a4de-ed6cd6d71d55" pod="kube-system/kindnet-n2h9x"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.520666    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-k82rb\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="4ac60a08-1196-4c72-9182-1503a7f8d38e" pod="kube-system/kube-proxy-k82rb"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.522135    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-d684w\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="2b94f928-d688-4568-be9a-29d7454bb47f" pod="kube-system/coredns-66bc5c9577-d684w"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.524677    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="7b593a3ef6240d38d6838ae48ab7be19" pod="kube-system/etcd-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.533021    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="be674be594d729bffaa7ea79994926a9" pod="kube-system/kube-apiserver-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.541137    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="d5a722a848b15da94e63b6ad8776c9c0" pod="kube-system/kube-controller-manager-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.556655    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-n2h9x\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="5e0b3867-09d0-4927-a4de-ed6cd6d71d55" pod="kube-system/kindnet-n2h9x"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.564119    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-k82rb\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="4ac60a08-1196-4c72-9182-1503a7f8d38e" pod="kube-system/kube-proxy-k82rb"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.575423    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-d684w\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="2b94f928-d688-4568-be9a-29d7454bb47f" pod="kube-system/coredns-66bc5c9577-d684w"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.577269    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="30fbb281a3b7e493dc880969a561a086" pod="kube-system/kube-scheduler-pause-255950"
	Oct 20 13:10:38 pause-255950 kubelet[1310]: W1020 13:10:38.280339    1310 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 20 13:10:40 pause-255950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:10:40 pause-255950 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:10:40 pause-255950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-255950 -n pause-255950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-255950 -n pause-255950: exit status 2 (568.913912ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-255950 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-255950
helpers_test.go:243: (dbg) docker inspect pause-255950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b",
	        "Created": "2025-10-20T13:09:24.7006455Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 429336,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:09:24.787824701Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b/hostname",
	        "HostsPath": "/var/lib/docker/containers/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b/hosts",
	        "LogPath": "/var/lib/docker/containers/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b/41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b-json.log",
	        "Name": "/pause-255950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-255950:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-255950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41b7259a2fc10cc747e2d1ae809199f99a77c9d75f0fa798280770aa7089ec1b",
	                "LowerDir": "/var/lib/docker/overlay2/657a871bdc57d48b700ddf42dd5906c59bbfa649a290a89d0f269adb4fe6cb19-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/657a871bdc57d48b700ddf42dd5906c59bbfa649a290a89d0f269adb4fe6cb19/merged",
	                "UpperDir": "/var/lib/docker/overlay2/657a871bdc57d48b700ddf42dd5906c59bbfa649a290a89d0f269adb4fe6cb19/diff",
	                "WorkDir": "/var/lib/docker/overlay2/657a871bdc57d48b700ddf42dd5906c59bbfa649a290a89d0f269adb4fe6cb19/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-255950",
	                "Source": "/var/lib/docker/volumes/pause-255950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-255950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-255950",
	                "name.minikube.sigs.k8s.io": "pause-255950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "989cc4cc3b1a9a1c0aad9e5578816b9c18cb5c91ae816b722eda5d5d0e8413b8",
	            "SandboxKey": "/var/run/docker/netns/989cc4cc3b1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33343"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33344"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33347"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33345"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33346"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-255950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:c1:e9:74:96:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e133b701f275efe85a9fe246ed8840cdea79bffe1760fac2f859c0978630d83e",
	                    "EndpointID": "df87b02a0728ce933fe32ff930260f1a4aad8861bb219d643586da27e99bac97",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-255950",
	                        "41b7259a2fc1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-255950 -n pause-255950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-255950 -n pause-255950: exit status 2 (463.302313ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-255950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-255950 logs -n 25: (1.710075658s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-608880 --schedule 5m                                                                                │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --cancel-scheduled                                                                           │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:07 UTC │ 20 Oct 25 13:07 UTC │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:08 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:08 UTC │                     │
	│ stop    │ -p scheduled-stop-608880 --schedule 15s                                                                               │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:08 UTC │ 20 Oct 25 13:08 UTC │
	│ delete  │ -p scheduled-stop-608880                                                                                              │ scheduled-stop-608880       │ jenkins │ v1.37.0 │ 20 Oct 25 13:08 UTC │ 20 Oct 25 13:09 UTC │
	│ start   │ -p insufficient-storage-255510 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-255510 │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │                     │
	│ delete  │ -p insufficient-storage-255510                                                                                        │ insufficient-storage-255510 │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │ 20 Oct 25 13:09 UTC │
	│ start   │ -p pause-255950 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-255950                │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │ 20 Oct 25 13:10 UTC │
	│ start   │ -p NoKubernetes-820821 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │                     │
	│ start   │ -p NoKubernetes-820821 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:09 UTC │ 20 Oct 25 13:09 UTC │
	│ start   │ -p NoKubernetes-820821 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ delete  │ -p NoKubernetes-820821                                                                                                │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ start   │ -p NoKubernetes-820821 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ ssh     │ -p NoKubernetes-820821 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │                     │
	│ start   │ -p pause-255950 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-255950                │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ stop    │ -p NoKubernetes-820821                                                                                                │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ start   │ -p NoKubernetes-820821 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ ssh     │ -p NoKubernetes-820821 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │                     │
	│ delete  │ -p NoKubernetes-820821                                                                                                │ NoKubernetes-820821         │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │ 20 Oct 25 13:10 UTC │
	│ start   │ -p missing-upgrade-507750 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-507750      │ jenkins │ v1.32.0 │ 20 Oct 25 13:10 UTC │                     │
	│ pause   │ -p pause-255950 --alsologtostderr -v=5                                                                                │ pause-255950                │ jenkins │ v1.37.0 │ 20 Oct 25 13:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:10:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:10:30.615671  438347 out.go:296] Setting OutFile to fd 1 ...
	I1020 13:10:30.615841  438347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1020 13:10:30.615845  438347 out.go:309] Setting ErrFile to fd 2...
	I1020 13:10:30.615850  438347 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1020 13:10:30.616098  438347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:10:30.616526  438347 out.go:303] Setting JSON to false
	I1020 13:10:30.617441  438347 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10381,"bootTime":1760955450,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:10:30.617504  438347 start.go:138] virtualization:  
	I1020 13:10:30.625104  438347 out.go:177] * [missing-upgrade-507750] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1020 13:10:30.628449  438347 out.go:177]   - MINIKUBE_LOCATION=21773
	I1020 13:10:30.628407  438347 notify.go:220] Checking for updates...
	I1020 13:10:30.631569  438347 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:10:30.634547  438347 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:10:30.637553  438347 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:10:30.640394  438347 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:10:30.643680  438347 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:10:30.647049  438347 config.go:182] Loaded profile config "pause-255950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:10:30.647124  438347 driver.go:378] Setting default libvirt URI to qemu:///system
	I1020 13:10:30.680520  438347 docker.go:122] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:10:30.680618  438347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:10:30.766142  438347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/last_update_check: {Name:mk3ce886fb63584532d5ebe1a44e2db12b224504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:10:30.772430  438347 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1020 13:10:30.775635  438347 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1020 13:10:30.820858  438347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-20 13:10:30.805114642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:10:30.820974  438347 docker.go:295] overlay module found
	I1020 13:10:30.824226  438347 out.go:177] * Using the docker driver based on user configuration
	I1020 13:10:30.827107  438347 start.go:298] selected driver: docker
	I1020 13:10:30.827118  438347 start.go:902] validating driver "docker" against <nil>
	I1020 13:10:30.827129  438347 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:10:30.827761  438347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:10:30.924078  438347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-20 13:10:30.915210253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:10:30.924225  438347 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1020 13:10:30.924466  438347 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1020 13:10:30.929819  438347 out.go:177] * Using Docker driver with root privileges
	I1020 13:10:30.932739  438347 cni.go:84] Creating CNI manager for ""
	I1020 13:10:30.932752  438347 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:10:30.932763  438347 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:10:30.932774  438347 start_flags.go:323] config:
	{Name:missing-upgrade-507750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-507750 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1020 13:10:30.935792  438347 out.go:177] * Starting control plane node missing-upgrade-507750 in cluster missing-upgrade-507750
	I1020 13:10:30.938590  438347 cache.go:121] Beginning downloading kic base image for docker with crio
	I1020 13:10:30.941403  438347 out.go:177] * Pulling base image ...
	I1020 13:10:30.944249  438347 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1020 13:10:30.944431  438347 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1020 13:10:30.974025  438347 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1020 13:10:30.974210  438347 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1020 13:10:30.974242  438347 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1020 13:10:31.000555  438347 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1020 13:10:31.000569  438347 cache.go:56] Caching tarball of preloaded images
	I1020 13:10:31.000709  438347 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1020 13:10:31.003883  438347 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1020 13:10:29.816106  435908 addons.go:514] duration metric: took 8.151193ms for enable addons: enabled=[]
	I1020 13:10:29.816179  435908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:10:30.056694  435908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:10:30.090622  435908 node_ready.go:35] waiting up to 6m0s for node "pause-255950" to be "Ready" ...
	I1020 13:10:31.006777  438347 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1020 13:10:31.091915  438347 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1020 13:10:35.473681  435908 node_ready.go:49] node "pause-255950" is "Ready"
	I1020 13:10:35.473752  435908 node_ready.go:38] duration metric: took 5.38309103s for node "pause-255950" to be "Ready" ...
	I1020 13:10:35.473780  435908 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:10:35.473869  435908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:10:35.536809  435908 api_server.go:72] duration metric: took 5.729193664s to wait for apiserver process to appear ...
	I1020 13:10:35.536879  435908 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:10:35.536924  435908 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:10:35.568721  435908 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 13:10:35.568804  435908 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 13:10:36.037370  435908 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:10:36.056559  435908 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:10:36.056665  435908 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:10:36.537362  435908 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:10:36.567866  435908 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:10:36.567907  435908 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:10:37.040419  435908 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:10:37.055430  435908 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 13:10:37.057521  435908 api_server.go:141] control plane version: v1.34.1
	I1020 13:10:37.057549  435908 api_server.go:131] duration metric: took 1.52064987s to wait for apiserver health ...
	I1020 13:10:37.057559  435908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:10:37.066503  435908 system_pods.go:59] 7 kube-system pods found
	I1020 13:10:37.066535  435908 system_pods.go:61] "coredns-66bc5c9577-d684w" [2b94f928-d688-4568-be9a-29d7454bb47f] Running
	I1020 13:10:37.066541  435908 system_pods.go:61] "etcd-pause-255950" [9f7c7106-0f1a-4118-a541-af5c48074753] Running
	I1020 13:10:37.066546  435908 system_pods.go:61] "kindnet-n2h9x" [5e0b3867-09d0-4927-a4de-ed6cd6d71d55] Running
	I1020 13:10:37.066551  435908 system_pods.go:61] "kube-apiserver-pause-255950" [8138aa55-bff9-4678-82bf-72db9097daaa] Running
	I1020 13:10:37.066555  435908 system_pods.go:61] "kube-controller-manager-pause-255950" [2316bbf5-0f02-4de3-b514-8f197b84311b] Running
	I1020 13:10:37.066559  435908 system_pods.go:61] "kube-proxy-k82rb" [4ac60a08-1196-4c72-9182-1503a7f8d38e] Running
	I1020 13:10:37.066564  435908 system_pods.go:61] "kube-scheduler-pause-255950" [5b85b699-5a1c-4259-91b2-f3722e6d86fe] Running
	I1020 13:10:37.066570  435908 system_pods.go:74] duration metric: took 9.005561ms to wait for pod list to return data ...
	I1020 13:10:37.066585  435908 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:10:37.070238  435908 default_sa.go:45] found service account: "default"
	I1020 13:10:37.070261  435908 default_sa.go:55] duration metric: took 3.66888ms for default service account to be created ...
	I1020 13:10:37.070271  435908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:10:37.074371  435908 system_pods.go:86] 7 kube-system pods found
	I1020 13:10:37.074460  435908 system_pods.go:89] "coredns-66bc5c9577-d684w" [2b94f928-d688-4568-be9a-29d7454bb47f] Running
	I1020 13:10:37.074484  435908 system_pods.go:89] "etcd-pause-255950" [9f7c7106-0f1a-4118-a541-af5c48074753] Running
	I1020 13:10:37.074524  435908 system_pods.go:89] "kindnet-n2h9x" [5e0b3867-09d0-4927-a4de-ed6cd6d71d55] Running
	I1020 13:10:37.074547  435908 system_pods.go:89] "kube-apiserver-pause-255950" [8138aa55-bff9-4678-82bf-72db9097daaa] Running
	I1020 13:10:37.074572  435908 system_pods.go:89] "kube-controller-manager-pause-255950" [2316bbf5-0f02-4de3-b514-8f197b84311b] Running
	I1020 13:10:37.074592  435908 system_pods.go:89] "kube-proxy-k82rb" [4ac60a08-1196-4c72-9182-1503a7f8d38e] Running
	I1020 13:10:37.074629  435908 system_pods.go:89] "kube-scheduler-pause-255950" [5b85b699-5a1c-4259-91b2-f3722e6d86fe] Running
	I1020 13:10:37.074655  435908 system_pods.go:126] duration metric: took 4.377861ms to wait for k8s-apps to be running ...
	I1020 13:10:37.074678  435908 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:10:37.074768  435908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:10:37.131797  435908 system_svc.go:56] duration metric: took 57.10957ms WaitForService to wait for kubelet
	I1020 13:10:37.131869  435908 kubeadm.go:586] duration metric: took 7.324258577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:10:37.131920  435908 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:10:37.136022  435908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:10:37.136111  435908 node_conditions.go:123] node cpu capacity is 2
	I1020 13:10:37.136139  435908 node_conditions.go:105] duration metric: took 4.186999ms to run NodePressure ...
	I1020 13:10:37.136182  435908 start.go:241] waiting for startup goroutines ...
	I1020 13:10:37.136208  435908 start.go:246] waiting for cluster config update ...
	I1020 13:10:37.136243  435908 start.go:255] writing updated cluster config ...
	I1020 13:10:37.136631  435908 ssh_runner.go:195] Run: rm -f paused
	I1020 13:10:37.141026  435908 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:10:37.141706  435908 kapi.go:59] client config for pause-255950: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21773-296391/.minikube/profiles/pause-255950/client.crt", KeyFile:"/home/jenkins/minikube-integration/21773-296391/.minikube/profiles/pause-255950/client.key", CAFile:"/home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1020 13:10:37.146068  435908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d684w" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.153267  435908 pod_ready.go:94] pod "coredns-66bc5c9577-d684w" is "Ready"
	I1020 13:10:37.153350  435908 pod_ready.go:86] duration metric: took 7.211795ms for pod "coredns-66bc5c9577-d684w" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.158298  435908 pod_ready.go:83] waiting for pod "etcd-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.166281  435908 pod_ready.go:94] pod "etcd-pause-255950" is "Ready"
	I1020 13:10:37.166364  435908 pod_ready.go:86] duration metric: took 7.994564ms for pod "etcd-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.170868  435908 pod_ready.go:83] waiting for pod "kube-apiserver-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.181868  435908 pod_ready.go:94] pod "kube-apiserver-pause-255950" is "Ready"
	I1020 13:10:37.181949  435908 pod_ready.go:86] duration metric: took 11.012287ms for pod "kube-apiserver-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.185318  435908 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.546647  435908 pod_ready.go:94] pod "kube-controller-manager-pause-255950" is "Ready"
	I1020 13:10:37.546672  435908 pod_ready.go:86] duration metric: took 361.285613ms for pod "kube-controller-manager-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:37.747715  435908 pod_ready.go:83] waiting for pod "kube-proxy-k82rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:38.163336  435908 pod_ready.go:94] pod "kube-proxy-k82rb" is "Ready"
	I1020 13:10:38.163364  435908 pod_ready.go:86] duration metric: took 415.62571ms for pod "kube-proxy-k82rb" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:38.373099  435908 pod_ready.go:83] waiting for pod "kube-scheduler-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:39.945700  435908 pod_ready.go:94] pod "kube-scheduler-pause-255950" is "Ready"
	I1020 13:10:39.945736  435908 pod_ready.go:86] duration metric: took 1.572614409s for pod "kube-scheduler-pause-255950" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:10:39.945748  435908 pod_ready.go:40] duration metric: took 2.804641942s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:10:40.049854  435908 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:10:40.054653  435908 out.go:179] * Done! kubectl is now configured to use "pause-255950" cluster and "default" namespace by default
	I1020 13:10:36.819252  438347 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1020 13:10:36.819339  438347 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1020 13:10:37.437000  438347 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1020 13:10:37.437011  438347 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1020 13:10:38.707458  438347 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1020 13:10:38.707570  438347 profile.go:148] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/missing-upgrade-507750/config.json ...
	I1020 13:10:38.707597  438347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/missing-upgrade-507750/config.json: {Name:mk999f095b8d0d36401a3e23db33e4a12b1d4a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	
	
	==> CRI-O <==
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.464585672Z" level=info msg="Started container" PID=2249 containerID=1cab805d87ac6945b1702fdba35f863d0eab965d77251ea63b6877b102faefc9 description=kube-system/kindnet-n2h9x/kindnet-cni id=04f80090-a983-4e94-871c-39d6da16e654 name=/runtime.v1.RuntimeService/StartContainer sandboxID=748f15ad8183f2caec81e4ce0c8d217fb063e11c796b212b5f8d1f2c45629f8c
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.467300081Z" level=info msg="Created container 7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa: kube-system/coredns-66bc5c9577-d684w/coredns" id=e0099207-67ac-40fc-83dc-7170af762ec2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.476203126Z" level=info msg="Starting container: 7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa" id=532dd7ac-7b20-402a-ba5c-4fe9739139a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.510339737Z" level=info msg="Started container" PID=2266 containerID=7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa description=kube-system/coredns-66bc5c9577-d684w/coredns id=532dd7ac-7b20-402a-ba5c-4fe9739139a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa17083d7af9daf6229612fbceea07dfb09c584c0422498ad85829ee7a35e9e7
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.598723639Z" level=info msg="Created container 29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32: kube-system/etcd-pause-255950/etcd" id=a899bdd4-f4d2-4e53-9ad7-a5fc5ce567e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.5994125Z" level=info msg="Starting container: 29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32" id=4ef70bef-e660-4c18-a2c4-6a9032763d5a name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:10:28 pause-255950 crio[2098]: time="2025-10-20T13:10:28.604665389Z" level=info msg="Started container" PID=2306 containerID=29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32 description=kube-system/etcd-pause-255950/etcd id=4ef70bef-e660-4c18-a2c4-6a9032763d5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=91edbe1c6ea7097ae494ba7e28a59e5e94159a3de657bf49a69bc39d43290ff4
	Oct 20 13:10:29 pause-255950 crio[2098]: time="2025-10-20T13:10:29.340650867Z" level=info msg="Created container 4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef: kube-system/kube-proxy-k82rb/kube-proxy" id=8c46f413-c019-45e6-90dd-dee999d2a56f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:10:29 pause-255950 crio[2098]: time="2025-10-20T13:10:29.344861062Z" level=info msg="Starting container: 4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef" id=9e2f90ef-2a38-4860-a936-ef39fd677f7b name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:10:29 pause-255950 crio[2098]: time="2025-10-20T13:10:29.350456732Z" level=info msg="Started container" PID=2282 containerID=4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef description=kube-system/kube-proxy-k82rb/kube-proxy id=9e2f90ef-2a38-4860-a936-ef39fd677f7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0067948d99ab84bd8dc4e2bfe92243e581c5a68cabc93eb91940bd16f5c2b4b3
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.898739632Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.907219518Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.907287104Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.907308397Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.912036661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.913006624Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.913033513Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.920319958Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.920356406Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.920451996Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.94585492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.946181151Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.946386815Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.954792132Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:10:38 pause-255950 crio[2098]: time="2025-10-20T13:10:38.955053591Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	29e036fd5499f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      1                   91edbe1c6ea70       etcd-pause-255950                      kube-system
	7f47d3906e4d6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   18 seconds ago      Running             coredns                   1                   aa17083d7af9d       coredns-66bc5c9577-d684w               kube-system
	4594ac9fd9b14       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   18 seconds ago      Running             kube-proxy                1                   0067948d99ab8       kube-proxy-k82rb                       kube-system
	1cab805d87ac6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   18 seconds ago      Running             kindnet-cni               1                   748f15ad8183f       kindnet-n2h9x                          kube-system
	fdcefaee03154       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            1                   178c199d7d7b1       kube-scheduler-pause-255950            kube-system
	8cce483217954       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            1                   bc376a4cd9ae8       kube-apiserver-pause-255950            kube-system
	b156159ae4110       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   1                   0113520ddb798       kube-controller-manager-pause-255950   kube-system
	39011d72401dc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   31 seconds ago      Exited              coredns                   0                   aa17083d7af9d       coredns-66bc5c9577-d684w               kube-system
	6f5a1501f1b44       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   42 seconds ago      Exited              kindnet-cni               0                   748f15ad8183f       kindnet-n2h9x                          kube-system
	a9e3751b02c02       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   42 seconds ago      Exited              kube-proxy                0                   0067948d99ab8       kube-proxy-k82rb                       kube-system
	e7447a699d783       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   59 seconds ago      Exited              kube-controller-manager   0                   0113520ddb798       kube-controller-manager-pause-255950   kube-system
	af0fb8c09a9f5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   59 seconds ago      Exited              etcd                      0                   91edbe1c6ea70       etcd-pause-255950                      kube-system
	f1a5586b1fcb4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   59 seconds ago      Exited              kube-apiserver            0                   bc376a4cd9ae8       kube-apiserver-pause-255950            kube-system
	e80f970d8b4aa       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   59 seconds ago      Exited              kube-scheduler            0                   178c199d7d7b1       kube-scheduler-pause-255950            kube-system
	
	
	==> coredns [39011d72401dcb55eb79e40b61f09e4f72691eb0d5f5693ff57d74a14f3d0718] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42666 - 59895 "HINFO IN 3938642766750719388.8342086583717484870. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024473909s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7f47d3906e4d63bb8baa188f1b538f9d919157f5c6a627585c662c9b749067fa] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42673 - 31354 "HINFO IN 3457138896136666846.3717781881670930441. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043221584s
	
	
	==> describe nodes <==
	Name:               pause-255950
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-255950
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=pause-255950
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_09_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:09:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-255950
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:10:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:10:29 +0000   Mon, 20 Oct 2025 13:09:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:10:29 +0000   Mon, 20 Oct 2025 13:09:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:10:29 +0000   Mon, 20 Oct 2025 13:09:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:10:29 +0000   Mon, 20 Oct 2025 13:10:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-255950
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                bb68224a-25df-447e-8a61-227046f8809e
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-d684w                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     44s
	  kube-system                 etcd-pause-255950                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         48s
	  kube-system                 kindnet-n2h9x                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-apiserver-pause-255950             250m (12%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-controller-manager-pause-255950    200m (10%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-proxy-k82rb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-pause-255950             100m (5%)     0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 42s                kube-proxy       
	  Normal   Starting                 11s                kube-proxy       
	  Normal   NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node pause-255950 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node pause-255950 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 60s)  kubelet          Node pause-255950 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 49s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 49s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  48s                kubelet          Node pause-255950 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s                kubelet          Node pause-255950 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s                kubelet          Node pause-255950 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           45s                node-controller  Node pause-255950 event: Registered Node pause-255950 in Controller
	  Normal   NodeReady                32s                kubelet          Node pause-255950 status is now: NodeReady
	  Normal   RegisteredNode           7s                 node-controller  Node pause-255950 event: Registered Node pause-255950 in Controller
	
	
	==> dmesg <==
	[Oct20 12:44] overlayfs: idmapped layers are currently not supported
	[Oct20 12:45] overlayfs: idmapped layers are currently not supported
	[ +37.059511] overlayfs: idmapped layers are currently not supported
	[Oct20 12:46] overlayfs: idmapped layers are currently not supported
	[Oct20 12:47] overlayfs: idmapped layers are currently not supported
	[  +3.282483] overlayfs: idmapped layers are currently not supported
	[Oct20 12:49] overlayfs: idmapped layers are currently not supported
	[Oct20 12:50] overlayfs: idmapped layers are currently not supported
	[Oct20 12:51] overlayfs: idmapped layers are currently not supported
	[Oct20 12:56] overlayfs: idmapped layers are currently not supported
	[Oct20 12:57] overlayfs: idmapped layers are currently not supported
	[Oct20 12:58] overlayfs: idmapped layers are currently not supported
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [29e036fd5499fede15134ad49e52c1352b0e33747d91795fddfc0d0129956e32] <==
	{"level":"warn","ts":"2025-10-20T13:10:31.897800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:31.986354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.019273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.041485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.105984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.114009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.175891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.218733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.246470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.350709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.392846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.402642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.479908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.501482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.542649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.600266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.644998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.695062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.801511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.802620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.888792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:32.934584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:33.012485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:33.058897Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:10:33.211366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52836","server-name":"","error":"EOF"}
	
	
	==> etcd [af0fb8c09a9f53a737646b19da0465322c52fc0d464cd81be947d0208be197e5] <==
	{"level":"warn","ts":"2025-10-20T13:09:51.849207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:51.893000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:51.950129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:52.005481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:52.044972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:52.081621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:09:52.237715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42830","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-20T13:10:20.264149Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-20T13:10:20.264208Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-255950","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-20T13:10:20.264350Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-20T13:10:20.421844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-20T13:10:20.422038Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T13:10:20.422103Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-20T13:10:20.422233Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-20T13:10:20.422282Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-20T13:10:20.422520Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-20T13:10:20.422580Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-20T13:10:20.422615Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-20T13:10:20.422456Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-20T13:10:20.422773Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-20T13:10:20.422811Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T13:10:20.426025Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-20T13:10:20.426175Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-20T13:10:20.426245Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-20T13:10:20.426294Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-255950","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 13:10:47 up  2:53,  0 user,  load average: 6.16, 2.79, 1.99
	Linux pause-255950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cab805d87ac6945b1702fdba35f863d0eab965d77251ea63b6877b102faefc9] <==
	I1020 13:10:28.617542       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:10:28.626184       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:10:28.626408       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:10:28.626462       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:10:28.628747       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:10:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:10:28.899725       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:10:28.899749       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:10:28.899759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:10:28.900564       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:10:35.636839       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 13:10:35.648609       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:10:35.648948       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:10:35.649158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1020 13:10:36.799990       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:10:36.800038       1 metrics.go:72] Registering metrics
	I1020 13:10:36.800117       1 controller.go:711] "Syncing nftables rules"
	I1020 13:10:38.897940       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:10:38.898223       1 main.go:301] handling current node
	
	
	==> kindnet [6f5a1501f1b448340bc0ee77a84c6377aba8c6891b0578c346a3bc68650bff81] <==
	I1020 13:10:04.116421       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:10:04.116913       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:10:04.117059       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:10:04.117097       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:10:04.117136       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:10:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:10:04.315454       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:10:04.400699       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:10:04.400798       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:10:04.400956       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 13:10:04.601179       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:10:04.601204       1 metrics.go:72] Registering metrics
	I1020 13:10:04.601267       1 controller.go:711] "Syncing nftables rules"
	I1020 13:10:14.316625       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:10:14.316691       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8cce483217954951ae86feef813e26cc38a286c0b7682a2b390450e5f79b1405] <==
	I1020 13:10:35.690269       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1020 13:10:35.690530       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 13:10:35.690634       1 aggregator.go:171] initial CRD sync complete...
	I1020 13:10:35.690784       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 13:10:35.690826       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:10:35.690875       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:10:35.690702       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:10:35.708664       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:10:35.712438       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 13:10:35.712588       1 policy_source.go:240] refreshing policies
	I1020 13:10:35.732434       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 13:10:35.732582       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 13:10:35.745091       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 13:10:35.745181       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 13:10:35.745321       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 13:10:35.745559       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 13:10:35.745655       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 13:10:35.752296       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 13:10:35.772527       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:10:35.851053       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:10:38.513369       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:10:39.530766       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:10:39.692041       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:10:39.743446       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:10:39.845994       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [f1a5586b1fcb4d600daf022adc9eba64e1f25ffc5eb78d36bc7acd7bae7a4bd0] <==
	W1020 13:10:20.316979       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317095       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317227       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317393       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317585       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317671       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.317857       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.318310       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.318518       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.318766       1 logging.go:55] [core] [Channel #26 SubChannel #28]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319025       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319207       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319440       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319574       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319733       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.316840       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.319448       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.320133       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.320231       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.320342       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.316146       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.321668       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.321324       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.321368       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1020 13:10:20.321452       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b156159ae41104e20ecf243e358f57a6b324c915fc74990fce030e1be3206013] <==
	I1020 13:10:39.532545       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 13:10:39.534509       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 13:10:39.534642       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 13:10:39.534714       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 13:10:39.537041       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 13:10:39.537188       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:10:39.546002       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:10:39.546146       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:10:39.549517       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:10:39.549593       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:10:39.549603       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:10:39.553302       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:10:39.559162       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 13:10:39.561298       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 13:10:39.562201       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:10:39.570842       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:10:39.576811       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:10:39.586037       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:10:39.586216       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 13:10:39.586300       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:10:39.586491       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-255950"
	I1020 13:10:39.588014       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 13:10:39.593214       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:10:39.609558       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:10:39.609669       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-controller-manager [e7447a699d78349eb0e4959ec142ecd1cecd256c92e233f6777cff9cd3437931] <==
	I1020 13:10:01.695716       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 13:10:01.695737       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 13:10:01.695753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 13:10:01.701800       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 13:10:01.705328       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:10:01.706919       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-255950" podCIDRs=["10.244.0.0/24"]
	I1020 13:10:01.708768       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 13:10:01.715185       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 13:10:01.725695       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:10:01.730053       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:10:01.731124       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:10:01.733430       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 13:10:01.737596       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:10:01.738815       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 13:10:01.746247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:10:01.746338       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:10:01.746371       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:10:01.747924       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 13:10:01.751509       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:10:01.751676       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:10:01.751783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-255950"
	I1020 13:10:01.751854       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1020 13:10:01.765913       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 13:10:01.766458       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:10:16.754086       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4594ac9fd9b1443ad905383ad952e2b7ebf6f4018b8e1b987824d89c54eea9ef] <==
	I1020 13:10:31.241329       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:10:33.975460       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:10:35.677042       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:10:35.677147       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:10:35.677334       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:10:35.836654       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:10:35.836808       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:10:35.865172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:10:35.865591       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:10:35.865806       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:10:35.867475       1 config.go:200] "Starting service config controller"
	I1020 13:10:35.876670       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:10:35.876813       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:10:35.876957       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:10:35.877050       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:10:35.877080       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:10:35.878007       1 config.go:309] "Starting node config controller"
	I1020 13:10:35.878079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:10:35.878117       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:10:35.977524       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:10:35.977642       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:10:35.977719       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a9e3751b02c02af3363adeea01cd46ab475f5353613ab3c7a2dbd4aaa67ce58e] <==
	I1020 13:10:03.860084       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:10:03.945806       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:10:04.053024       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:10:04.053067       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:10:04.053132       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:10:04.140316       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:10:04.140602       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:10:04.145320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:10:04.145689       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:10:04.145891       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:10:04.150652       1 config.go:200] "Starting service config controller"
	I1020 13:10:04.150732       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:10:04.150791       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:10:04.150820       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:10:04.150856       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:10:04.150883       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:10:04.151782       1 config.go:309] "Starting node config controller"
	I1020 13:10:04.151846       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:10:04.151876       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:10:04.251190       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:10:04.251291       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:10:04.251321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e80f970d8b4aad9cbd724b45750c6a4fcef45515362248fb7547d0f931dc4e3f] <==
	I1020 13:09:54.293798       1 serving.go:386] Generated self-signed cert in-memory
	W1020 13:09:55.989268       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 13:09:55.989372       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 13:09:55.989407       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 13:09:55.989448       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 13:09:56.042991       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:09:56.043138       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:09:56.049783       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:09:56.052955       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:09:56.056916       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:09:56.057291       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1020 13:09:56.072890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1020 13:09:57.556086       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:10:20.265924       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1020 13:10:20.265950       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1020 13:10:20.265972       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1020 13:10:20.266017       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:10:20.266300       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1020 13:10:20.266347       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fdcefaee03154ff132395152aa895f9040b0c73ea3c4489fcff9c0d96c5ccfdf] <==
	I1020 13:10:31.804345       1 serving.go:386] Generated self-signed cert in-memory
	I1020 13:10:37.396861       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:10:37.396978       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:10:37.409350       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 13:10:37.409470       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 13:10:37.409568       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:10:37.409619       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:10:37.409671       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:10:37.409704       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:10:37.411256       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:10:37.411347       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:10:37.510382       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:10:37.510522       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 13:10:37.510667       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:10:28 pause-255950 kubelet[1310]: E1020 13:10:28.487444    1310 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-255950?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 20 13:10:28 pause-255950 kubelet[1310]: E1020 13:10:28.487666    1310 controller.go:195] "Failed to update lease" err="Put \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-255950?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused"
	Oct 20 13:10:28 pause-255950 kubelet[1310]: I1020 13:10:28.487695    1310 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Oct 20 13:10:28 pause-255950 kubelet[1310]: E1020 13:10:28.487873    1310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-255950?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="200ms"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.395355    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="30fbb281a3b7e493dc880969a561a086" pod="kube-system/kube-scheduler-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.396196    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-255950\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.396400    1310 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-255950\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.446663    1310 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-255950\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.481427    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="7b593a3ef6240d38d6838ae48ab7be19" pod="kube-system/etcd-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.497363    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="be674be594d729bffaa7ea79994926a9" pod="kube-system/kube-apiserver-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.508663    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="d5a722a848b15da94e63b6ad8776c9c0" pod="kube-system/kube-controller-manager-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.513080    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-n2h9x\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="5e0b3867-09d0-4927-a4de-ed6cd6d71d55" pod="kube-system/kindnet-n2h9x"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.520666    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-k82rb\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="4ac60a08-1196-4c72-9182-1503a7f8d38e" pod="kube-system/kube-proxy-k82rb"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.522135    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-d684w\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="2b94f928-d688-4568-be9a-29d7454bb47f" pod="kube-system/coredns-66bc5c9577-d684w"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.524677    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="7b593a3ef6240d38d6838ae48ab7be19" pod="kube-system/etcd-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.533021    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="be674be594d729bffaa7ea79994926a9" pod="kube-system/kube-apiserver-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.541137    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="d5a722a848b15da94e63b6ad8776c9c0" pod="kube-system/kube-controller-manager-pause-255950"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.556655    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-n2h9x\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="5e0b3867-09d0-4927-a4de-ed6cd6d71d55" pod="kube-system/kindnet-n2h9x"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.564119    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-k82rb\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="4ac60a08-1196-4c72-9182-1503a7f8d38e" pod="kube-system/kube-proxy-k82rb"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.575423    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-d684w\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="2b94f928-d688-4568-be9a-29d7454bb47f" pod="kube-system/coredns-66bc5c9577-d684w"
	Oct 20 13:10:35 pause-255950 kubelet[1310]: E1020 13:10:35.577269    1310 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-255950\" is forbidden: User \"system:node:pause-255950\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-255950' and this object" podUID="30fbb281a3b7e493dc880969a561a086" pod="kube-system/kube-scheduler-pause-255950"
	Oct 20 13:10:38 pause-255950 kubelet[1310]: W1020 13:10:38.280339    1310 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 20 13:10:40 pause-255950 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:10:40 pause-255950 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:10:40 pause-255950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-255950 -n pause-255950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-255950 -n pause-255950: exit status 2 (468.628407ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-255950 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (276.411838ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:19:10Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
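For context: the MK_ADDON_ENABLE_PAUSED error above comes from minikube's pre-flight "check paused" step, which (per the stderr) runs `sudo runc list -f json` on the node; on this crio node the runc state directory /run/runc does not exist, so the listing itself exits non-zero. Below is a minimal Go sketch of that check, assuming the docker node name from this test; the helper is illustrative, not minikube's actual implementation.

	// listPaused reproduces the failing step: list runc containers inside
	// the node container and collect the ones whose status is "paused".
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// `runc list -f json` prints an array of state objects; only the id
	// and status fields matter here.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPaused(node string) ([]string, error) {
		out, err := exec.Command("docker", "exec", node,
			"sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the branch hit above: /run/runc is missing,
			// so runc exits with status 1 before printing any state.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused("old-k8s-version-995203")
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}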
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-995203 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-995203 describe deploy/metrics-server -n kube-system: exit status 1 (83.276766ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-995203 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
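The assertion at start_stop_delete_test.go:219 reduces to a substring check: the describe output for deploy/metrics-server must contain the registry-rewritten image reference. A hedged Go sketch of that check, reusing the context name and image string from the log (the function name and structure are illustrative, not the test's real helpers):

	// checkAddonImage describes the metrics-server deployment and requires
	// the fake.domain-prefixed echoserver image to appear in the output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func checkAddonImage() error {
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-995203",
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if err != nil {
			// In this run the deployment was never created, so describe
			// itself fails with NotFound and the assertion has nothing to check.
			return fmt.Errorf("describe deploy/metrics-server: %w\n%s", err, out)
		}
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(string(out), want) {
			return fmt.Errorf("addon did not load correct image %q; got:\n%s", want, out)
		}
		return nil
	}

	func main() {
		if err := checkAddonImage(); err != nil {
			fmt.Println(err)
		}
	}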
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-995203
helpers_test.go:243: (dbg) docker inspect old-k8s-version-995203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743",
	        "Created": "2025-10-20T13:17:39.717282575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475201,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:17:39.778202925Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/hosts",
	        "LogPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743-json.log",
	        "Name": "/old-k8s-version-995203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-995203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-995203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743",
	                "LowerDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-995203",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-995203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-995203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-995203",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-995203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "779a9d0a19e31dd6a933136d062ed6a39ab1852563029f14c6288e0415392c99",
	            "SandboxKey": "/var/run/docker/netns/779a9d0a19e3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-995203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:d4:05:cb:6c:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e48fc4c3ab83a2a7d44a282549d8182e6b6d0f2aee11543e9c45f4ee745a84b",
	                    "EndpointID": "8e1050ca4c130b55e01cd08019e5a4b207b9b8a94410d05225da72abe8cd73b1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-995203",
	                        "bc62e325c2a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-995203 -n old-k8s-version-995203
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-995203 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-995203 logs -n 25: (1.219664512s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-308474 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo containerd config dump                                                                                                                                                                                                  │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo crio config                                                                                                                                                                                                             │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ delete  │ -p cilium-308474                                                                                                                                                                                                                              │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ start   │ -p force-systemd-env-534257 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-534257  │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ delete  │ -p force-systemd-env-534257                                                                                                                                                                                                                   │ force-systemd-env-534257  │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-066011    │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ delete  │ -p kubernetes-upgrade-314577                                                                                                                                                                                                                  │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p cert-options-123220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ cert-options-123220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ -p cert-options-123220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ delete  │ -p cert-options-123220                                                                                                                                                                                                                        │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:17:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:17:33.696431  474806 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:17:33.696678  474806 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:17:33.696707  474806 out.go:374] Setting ErrFile to fd 2...
	I1020 13:17:33.696730  474806 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:17:33.697041  474806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:17:33.697567  474806 out.go:368] Setting JSON to false
	I1020 13:17:33.698657  474806 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10804,"bootTime":1760955450,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:17:33.698756  474806 start.go:141] virtualization:  
	I1020 13:17:33.702515  474806 out.go:179] * [old-k8s-version-995203] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:17:33.707092  474806 notify.go:220] Checking for updates...
	I1020 13:17:33.710713  474806 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:17:33.714351  474806 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:17:33.717633  474806 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:17:33.720849  474806 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:17:33.724035  474806 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:17:33.727256  474806 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:17:33.731000  474806 config.go:182] Loaded profile config "cert-expiration-066011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:17:33.731167  474806 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:17:33.767705  474806 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:17:33.767838  474806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:17:33.842770  474806 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:17:33.832753288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:17:33.842880  474806 docker.go:318] overlay module found
	I1020 13:17:33.846202  474806 out.go:179] * Using the docker driver based on user configuration
	I1020 13:17:33.849173  474806 start.go:305] selected driver: docker
	I1020 13:17:33.849195  474806 start.go:925] validating driver "docker" against <nil>
	I1020 13:17:33.849210  474806 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:17:33.850167  474806 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:17:33.933050  474806 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:17:33.923158197 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:17:33.933209  474806 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 13:17:33.933446  474806 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:17:33.936495  474806 out.go:179] * Using Docker driver with root privileges
	I1020 13:17:33.939460  474806 cni.go:84] Creating CNI manager for ""
	I1020 13:17:33.939531  474806 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:17:33.939540  474806 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:17:33.939646  474806 start.go:349] cluster config:
	{Name:old-k8s-version-995203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:17:33.942831  474806 out.go:179] * Starting "old-k8s-version-995203" primary control-plane node in "old-k8s-version-995203" cluster
	I1020 13:17:33.945758  474806 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:17:33.948784  474806 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:17:33.951615  474806 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 13:17:33.951671  474806 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1020 13:17:33.951681  474806 cache.go:58] Caching tarball of preloaded images
	I1020 13:17:33.951782  474806 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:17:33.951793  474806 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1020 13:17:33.951914  474806 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/config.json ...
	I1020 13:17:33.951934  474806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/config.json: {Name:mk29751a454b3b668f0dde7e3b84236b50699454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:17:33.952106  474806 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:17:33.973471  474806 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:17:33.973492  474806 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:17:33.973506  474806 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:17:33.973529  474806 start.go:360] acquireMachinesLock for old-k8s-version-995203: {Name:mkd132b643a63689d30b93c4e854e38e99314f40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:17:33.973629  474806 start.go:364] duration metric: took 85.547µs to acquireMachinesLock for "old-k8s-version-995203"
	I1020 13:17:33.973661  474806 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-995203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:17:33.973726  474806 start.go:125] createHost starting for "" (driver="docker")
	I1020 13:17:33.977244  474806 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 13:17:33.977483  474806 start.go:159] libmachine.API.Create for "old-k8s-version-995203" (driver="docker")
	I1020 13:17:33.977527  474806 client.go:168] LocalClient.Create starting
	I1020 13:17:33.977603  474806 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem
	I1020 13:17:33.977639  474806 main.go:141] libmachine: Decoding PEM data...
	I1020 13:17:33.977652  474806 main.go:141] libmachine: Parsing certificate...
	I1020 13:17:33.977702  474806 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem
	I1020 13:17:33.977719  474806 main.go:141] libmachine: Decoding PEM data...
	I1020 13:17:33.977729  474806 main.go:141] libmachine: Parsing certificate...
	I1020 13:17:33.978091  474806 cli_runner.go:164] Run: docker network inspect old-k8s-version-995203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 13:17:33.994757  474806 cli_runner.go:211] docker network inspect old-k8s-version-995203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 13:17:33.994835  474806 network_create.go:284] running [docker network inspect old-k8s-version-995203] to gather additional debugging logs...
	I1020 13:17:33.994851  474806 cli_runner.go:164] Run: docker network inspect old-k8s-version-995203
	W1020 13:17:34.015530  474806 cli_runner.go:211] docker network inspect old-k8s-version-995203 returned with exit code 1
	I1020 13:17:34.015564  474806 network_create.go:287] error running [docker network inspect old-k8s-version-995203]: docker network inspect old-k8s-version-995203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-995203 not found
	I1020 13:17:34.015577  474806 network_create.go:289] output of [docker network inspect old-k8s-version-995203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-995203 not found
	
	** /stderr **
	I1020 13:17:34.015709  474806 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:17:34.032886  474806 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-31214b196961 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:99:57:10:1b:40} reservation:<nil>}
	I1020 13:17:34.033191  474806 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf6e9e751b4a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:0d:2b:68:24:bc} reservation:<nil>}
	I1020 13:17:34.034069  474806 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-076921d0625d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:c5:51:b1:3d:0c} reservation:<nil>}
	I1020 13:17:34.034518  474806 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a21390}
	I1020 13:17:34.034541  474806 network_create.go:124] attempt to create docker network old-k8s-version-995203 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1020 13:17:34.034597  474806 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-995203 old-k8s-version-995203
	I1020 13:17:34.107892  474806 network_create.go:108] docker network old-k8s-version-995203 192.168.76.0/24 created
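
Note: the three "skipping subnet" lines above are minikube's free-subnet scan (network.go): it walks the private 192.168.x.0/24 ranges, skips any already claimed by a Docker bridge, and creates the network on the first free one. A minimal shell sketch of the same scan-then-create sequence; the create flags are copied from the log, while the listing loop is an assumed simplification of the Go implementation:

    # list subnets already claimed by Docker bridge networks
    docker network ls -q --filter driver=bridge \
      | xargs docker network inspect \
          --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # after choosing a free /24 (192.168.76.0/24 here), create it:
    docker network create --driver=bridge \
      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=old-k8s-version-995203 \
      old-k8s-version-995203
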
	I1020 13:17:34.107929  474806 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-995203" container
	I1020 13:17:34.108016  474806 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 13:17:34.124405  474806 cli_runner.go:164] Run: docker volume create old-k8s-version-995203 --label name.minikube.sigs.k8s.io=old-k8s-version-995203 --label created_by.minikube.sigs.k8s.io=true
	I1020 13:17:34.142054  474806 oci.go:103] Successfully created a docker volume old-k8s-version-995203
	I1020 13:17:34.142150  474806 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-995203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-995203 --entrypoint /usr/bin/test -v old-k8s-version-995203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 13:17:34.650454  474806 oci.go:107] Successfully prepared a docker volume old-k8s-version-995203
	I1020 13:17:34.650491  474806 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 13:17:34.650509  474806 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 13:17:34.650582  474806 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-995203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 13:17:39.648235  474806 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-995203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.997532255s)
	I1020 13:17:39.648270  474806 kic.go:203] duration metric: took 4.997757472s to extract preloaded images to volume ...
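
Note: the ~5s extraction above uses the standard trick for seeding a named volume: run a throwaway container from the kicbase image with tar as the entrypoint and the preload tarball bind-mounted read-only. Stripped of the long paths, the pattern from the log looks like this (PRELOAD and KIC_IMAGE stand in for the full tarball path and image digest shown above):

    PRELOAD=preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
    KIC_IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/$PRELOAD:/preloaded.tar:ro" \
      -v old-k8s-version-995203:/extractDir \
      "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
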
	W1020 13:17:39.648445  474806 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1020 13:17:39.648574  474806 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 13:17:39.702317  474806 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-995203 --name old-k8s-version-995203 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-995203 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-995203 --network old-k8s-version-995203 --ip 192.168.76.2 --volume old-k8s-version-995203:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 13:17:40.034260  474806 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Running}}
	I1020 13:17:40.056355  474806 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:17:40.078238  474806 cli_runner.go:164] Run: docker exec old-k8s-version-995203 stat /var/lib/dpkg/alternatives/iptables
	I1020 13:17:40.145482  474806 oci.go:144] the created container "old-k8s-version-995203" has a running status.
	I1020 13:17:40.145514  474806 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa...
	I1020 13:17:40.558901  474806 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 13:17:40.584694  474806 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:17:40.613664  474806 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 13:17:40.613688  474806 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-995203 chown docker:docker /home/docker/.ssh/authorized_keys]
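
Note: the steps at 13:17:40.5-40.7 wire up SSH access to the KIC container: a fresh RSA key pair is generated on the host, the public half lands in the container as /home/docker/.ssh/authorized_keys, and ownership is fixed with a privileged exec. An approximate manual reproduction (minikube streams the file itself via its kic_runner; docker cp is used here for brevity, and the port comes from the log):

    ssh-keygen -t rsa -N '' -f ./id_rsa                 # host-side key pair
    docker cp ./id_rsa.pub \
      old-k8s-version-995203:/home/docker/.ssh/authorized_keys
    docker exec --privileged old-k8s-version-995203 \
      chown docker:docker /home/docker/.ssh/authorized_keys
    ssh -i ./id_rsa -p 33418 docker@127.0.0.1 hostname  # port 33418 per the log
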
	I1020 13:17:40.682141  474806 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:17:40.707678  474806 machine.go:93] provisionDockerMachine start ...
	I1020 13:17:40.707785  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:40.728951  474806 main.go:141] libmachine: Using SSH client type: native
	I1020 13:17:40.729279  474806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1020 13:17:40.729289  474806 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:17:40.730969  474806 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1020 13:17:43.883839  474806 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-995203
	
	I1020 13:17:43.883867  474806 ubuntu.go:182] provisioning hostname "old-k8s-version-995203"
	I1020 13:17:43.883939  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:43.903290  474806 main.go:141] libmachine: Using SSH client type: native
	I1020 13:17:43.903635  474806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1020 13:17:43.903653  474806 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-995203 && echo "old-k8s-version-995203" | sudo tee /etc/hostname
	I1020 13:17:44.066101  474806 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-995203
	
	I1020 13:17:44.066203  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:44.085243  474806 main.go:141] libmachine: Using SSH client type: native
	I1020 13:17:44.085574  474806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1020 13:17:44.085596  474806 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-995203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-995203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-995203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:17:44.237021  474806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:17:44.237057  474806 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:17:44.237100  474806 ubuntu.go:190] setting up certificates
	I1020 13:17:44.237131  474806 provision.go:84] configureAuth start
	I1020 13:17:44.237222  474806 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-995203
	I1020 13:17:44.258110  474806 provision.go:143] copyHostCerts
	I1020 13:17:44.258189  474806 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:17:44.258206  474806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:17:44.258288  474806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:17:44.258387  474806 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:17:44.258396  474806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:17:44.258424  474806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:17:44.258496  474806 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:17:44.258506  474806 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:17:44.258533  474806 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:17:44.258594  474806 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-995203 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-995203]
	I1020 13:17:44.481219  474806 provision.go:177] copyRemoteCerts
	I1020 13:17:44.481334  474806 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:17:44.481385  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:44.499947  474806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:17:44.608414  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:17:44.626700  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1020 13:17:44.645007  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 13:17:44.662663  474806 provision.go:87] duration metric: took 425.502487ms to configureAuth
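
Note: the server certificate generated during configureAuth is what lets the machine be addressed by any of the SANs listed at 13:17:44.258 (127.0.0.1, 192.168.76.2, localhost, minikube, old-k8s-version-995203). minikube does this in Go; a hedged openssl equivalent makes the inputs explicit, with 1095 days matching the CertExpiration of 26280h in the machine config:

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.old-k8s-version-995203"
    openssl x509 -req -in server.csr -days 1095 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-995203')
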
	I1020 13:17:44.662692  474806 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:17:44.662879  474806 config.go:182] Loaded profile config "old-k8s-version-995203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 13:17:44.662990  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:44.681533  474806 main.go:141] libmachine: Using SSH client type: native
	I1020 13:17:44.681856  474806 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1020 13:17:44.681877  474806 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:17:44.950549  474806 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:17:44.950570  474806 machine.go:96] duration metric: took 4.242872332s to provisionDockerMachine
	I1020 13:17:44.950580  474806 client.go:171] duration metric: took 10.97304652s to LocalClient.Create
	I1020 13:17:44.950594  474806 start.go:167] duration metric: took 10.973112375s to libmachine.API.Create "old-k8s-version-995203"
	I1020 13:17:44.950601  474806 start.go:293] postStartSetup for "old-k8s-version-995203" (driver="docker")
	I1020 13:17:44.950611  474806 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:17:44.950675  474806 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:17:44.950713  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:44.968625  474806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:17:45.091903  474806 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:17:45.097527  474806 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:17:45.097606  474806 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:17:45.097639  474806 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:17:45.099191  474806 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:17:45.099332  474806 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:17:45.099482  474806 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:17:45.115360  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:17:45.162557  474806 start.go:296] duration metric: took 211.939279ms for postStartSetup
	I1020 13:17:45.163000  474806 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-995203
	I1020 13:17:45.183283  474806 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/config.json ...
	I1020 13:17:45.183740  474806 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:17:45.183907  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:45.205657  474806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:17:45.326566  474806 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:17:45.332285  474806 start.go:128] duration metric: took 11.358539605s to createHost
	I1020 13:17:45.332319  474806 start.go:83] releasing machines lock for "old-k8s-version-995203", held for 11.358681655s
	I1020 13:17:45.332464  474806 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-995203
	I1020 13:17:45.359182  474806 ssh_runner.go:195] Run: cat /version.json
	I1020 13:17:45.359247  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:45.359512  474806 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:17:45.359615  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:17:45.378932  474806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:17:45.380823  474806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:17:45.484166  474806 ssh_runner.go:195] Run: systemctl --version
	I1020 13:17:45.577811  474806 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:17:45.618847  474806 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:17:45.623377  474806 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:17:45.623450  474806 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:17:45.653115  474806 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1020 13:17:45.653141  474806 start.go:495] detecting cgroup driver to use...
	I1020 13:17:45.653173  474806 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:17:45.653221  474806 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:17:45.670581  474806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:17:45.683909  474806 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:17:45.683998  474806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:17:45.701912  474806 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:17:45.720111  474806 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:17:45.842026  474806 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:17:45.975273  474806 docker.go:234] disabling docker service ...
	I1020 13:17:45.975362  474806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:17:45.998107  474806 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:17:46.015512  474806 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:17:46.134044  474806 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:17:46.258002  474806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
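
Note: because the kicbase image ships docker and cri-dockerd alongside CRI-O, both must be taken out of the picture before CRI-O can own the node. The log's stop/disable/mask sequence, condensed into one sketch:

    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit"
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"
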
	I1020 13:17:46.270614  474806 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:17:46.284058  474806 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1020 13:17:46.284155  474806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:17:46.292872  474806 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:17:46.293000  474806 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:17:46.301648  474806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:17:46.310094  474806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:17:46.319082  474806 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:17:46.327381  474806 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:17:46.336246  474806 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:17:46.349903  474806 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:17:46.358779  474806 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:17:46.366493  474806 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:17:46.373839  474806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:17:46.492832  474806 ssh_runner.go:195] Run: sudo systemctl restart crio
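
Note: the run of sed edits at 13:17:46.28-46.35 all target the same drop-in. After they land, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following (assumed section layout; only the keys touched in the log are shown):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
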
	I1020 13:17:46.623939  474806 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:17:46.624009  474806 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:17:46.627795  474806 start.go:563] Will wait 60s for crictl version
	I1020 13:17:46.627863  474806 ssh_runner.go:195] Run: which crictl
	I1020 13:17:46.631422  474806 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:17:46.655808  474806 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:17:46.655894  474806 ssh_runner.go:195] Run: crio --version
	I1020 13:17:46.682886  474806 ssh_runner.go:195] Run: crio --version
	I1020 13:17:46.714970  474806 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1020 13:17:46.717554  474806 cli_runner.go:164] Run: docker network inspect old-k8s-version-995203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:17:46.735368  474806 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 13:17:46.739490  474806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
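
Note: the one-liner above is how minikube pins host.minikube.internal (and, later, control-plane.minikube.internal) in /etc/hosts: filter out any stale entry, append the fresh one to a temp file, then install it with a single cp so /etc/hosts is never left half-written. Generalized:

    NAME=host.minikube.internal IP=192.168.76.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
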
	I1020 13:17:46.749884  474806 kubeadm.go:883] updating cluster {Name:old-k8s-version-995203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:17:46.749996  474806 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 13:17:46.750061  474806 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:17:46.783123  474806 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:17:46.783143  474806 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:17:46.783198  474806 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:17:46.815958  474806 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:17:46.816036  474806 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:17:46.816060  474806 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1020 13:17:46.816179  474806 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-995203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 13:17:46.816278  474806 ssh_runner.go:195] Run: crio config
	I1020 13:17:46.889860  474806 cni.go:84] Creating CNI manager for ""
	I1020 13:17:46.889882  474806 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:17:46.889901  474806 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:17:46.889924  474806 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-995203 NodeName:old-k8s-version-995203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:17:46.890062  474806 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-995203"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 13:17:46.890150  474806 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1020 13:17:46.898654  474806 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:17:46.898767  474806 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:17:46.906678  474806 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1020 13:17:46.919493  474806 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:17:46.933108  474806 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
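
Note: at this point the rendered kubeadm config shown above sits at /var/tmp/minikube/kubeadm.yaml.new (2160 bytes), waiting to be promoted to kubeadm.yaml. To sanity-check it by hand, the kubeadm config validate subcommand (present in the v1.28 line, assumed usable here) can be pointed at the staged file:

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
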
	I1020 13:17:46.945636  474806 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:17:46.949152  474806 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:17:46.958366  474806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:17:47.076465  474806 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:17:47.093723  474806 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203 for IP: 192.168.76.2
	I1020 13:17:47.093788  474806 certs.go:195] generating shared ca certs ...
	I1020 13:17:47.093807  474806 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:17:47.093995  474806 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:17:47.094070  474806 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:17:47.094105  474806 certs.go:257] generating profile certs ...
	I1020 13:17:47.094183  474806 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.key
	I1020 13:17:47.094229  474806 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt with IP's: []
	I1020 13:17:47.854147  474806 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt ...
	I1020 13:17:47.854220  474806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: {Name:mk29aa8e22c3564d03212206d883000e9f35947d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:17:47.854490  474806 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.key ...
	I1020 13:17:47.854524  474806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.key: {Name:mk1ed695be66ccc26bf1ac9d2ebf0db48b977924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:17:47.854693  474806 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key.8c7cc26d
	I1020 13:17:47.854734  474806 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.crt.8c7cc26d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1020 13:17:48.491496  474806 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.crt.8c7cc26d ...
	I1020 13:17:48.491527  474806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.crt.8c7cc26d: {Name:mk781e6ec81a37e3b3cc28038fa5aae855142515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:17:48.491730  474806 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key.8c7cc26d ...
	I1020 13:17:48.491748  474806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key.8c7cc26d: {Name:mk62040b28a422db74875808f29faa1a2ed999e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:17:48.491829  474806 certs.go:382] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.crt.8c7cc26d -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.crt
	I1020 13:17:48.491924  474806 certs.go:386] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key.8c7cc26d -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key
	I1020 13:17:48.491988  474806 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.key
	I1020 13:17:48.492007  474806 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.crt with IP's: []
	I1020 13:17:48.965599  474806 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.crt ...
	I1020 13:17:48.965627  474806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.crt: {Name:mk1a58bf6b1319d2fa59666985bf61a88ec8255e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:17:48.965813  474806 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.key ...
	I1020 13:17:48.965827  474806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.key: {Name:mkbeb53ce3b2969b38f11cc3d82bae28b7aa9233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:17:48.966016  474806 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:17:48.966059  474806 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:17:48.966072  474806 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:17:48.966099  474806 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:17:48.966125  474806 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:17:48.966149  474806 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:17:48.966198  474806 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:17:48.966811  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:17:48.986665  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:17:49.005765  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:17:49.025153  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:17:49.043150  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1020 13:17:49.061364  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:17:49.079885  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:17:49.098230  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 13:17:49.115681  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:17:49.134060  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:17:49.151857  474806 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:17:49.170918  474806 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:17:49.184778  474806 ssh_runner.go:195] Run: openssl version
	I1020 13:17:49.191075  474806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:17:49.199926  474806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:17:49.203853  474806 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:17:49.203920  474806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:17:49.245431  474806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:17:49.254072  474806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:17:49.262416  474806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:17:49.266306  474806 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:17:49.266385  474806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:17:49.311913  474806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:17:49.321215  474806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:17:49.329691  474806 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:17:49.333814  474806 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:17:49.333934  474806 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:17:49.378720  474806 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
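
Note: the three ln -fs commands above create OpenSSL-style CA lookup links: each link name is the 8-hex-digit subject hash that `openssl x509 -hash` prints, suffixed with `.0`. That is why minikubeCA.pem ends up reachable as b5213941.0:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   <- subject-name hash
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
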
	I1020 13:17:49.388288  474806 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:17:49.393407  474806 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 13:17:49.393520  474806 kubeadm.go:400] StartCluster: {Name:old-k8s-version-995203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:17:49.393670  474806 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:17:49.393792  474806 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:17:49.423455  474806 cri.go:89] found id: ""
	I1020 13:17:49.423624  474806 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:17:49.431428  474806 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 13:17:49.439152  474806 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 13:17:49.439225  474806 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 13:17:49.446910  474806 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 13:17:49.446930  474806 kubeadm.go:157] found existing configuration files:
	
	I1020 13:17:49.446984  474806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 13:17:49.454808  474806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 13:17:49.454875  474806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 13:17:49.462301  474806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 13:17:49.469971  474806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 13:17:49.470060  474806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 13:17:49.477379  474806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 13:17:49.484777  474806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 13:17:49.484871  474806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 13:17:49.493024  474806 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 13:17:49.500703  474806 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 13:17:49.500787  474806 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
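
Note: the four grep/rm pairs above are the stale-config cleanup: a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. Condensed:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 \
        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done
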
	I1020 13:17:49.508938  474806 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 13:17:49.554383  474806 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1020 13:17:49.554781  474806 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 13:17:49.598102  474806 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 13:17:49.598207  474806 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1020 13:17:49.598254  474806 kubeadm.go:318] OS: Linux
	I1020 13:17:49.598334  474806 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 13:17:49.598404  474806 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1020 13:17:49.598468  474806 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 13:17:49.598531  474806 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 13:17:49.598594  474806 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 13:17:49.598658  474806 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 13:17:49.598720  474806 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 13:17:49.598783  474806 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 13:17:49.598847  474806 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1020 13:17:49.683800  474806 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 13:17:49.683928  474806 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 13:17:49.684044  474806 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 13:17:49.843303  474806 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 13:17:49.849813  474806 out.go:252]   - Generating certificates and keys ...
	I1020 13:17:49.849913  474806 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 13:17:49.849988  474806 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 13:17:50.429751  474806 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 13:17:51.066113  474806 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 13:17:51.416216  474806 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 13:17:51.725487  474806 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 13:17:52.139948  474806 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 13:17:52.140247  474806 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-995203] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1020 13:17:52.679977  474806 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 13:17:52.680135  474806 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-995203] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1020 13:17:52.919053  474806 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 13:17:53.250247  474806 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 13:17:54.177024  474806 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 13:17:54.177353  474806 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 13:17:54.441955  474806 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 13:17:55.235834  474806 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1020 13:17:55.424020  474806 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 13:17:55.802076  474806 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 13:17:55.802181  474806 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 13:17:55.804756  474806 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 13:17:55.808444  474806 out.go:252]   - Booting up control plane ...
	I1020 13:17:55.808549  474806 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 13:17:55.808632  474806 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 13:17:55.808710  474806 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 13:17:55.827118  474806 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 13:17:55.827925  474806 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 13:17:55.828330  474806 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 13:17:55.968352  474806 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1020 13:18:03.974548  474806 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.006874 seconds
	I1020 13:18:03.974684  474806 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 13:18:03.989886  474806 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 13:18:04.523003  474806 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 13:18:04.523269  474806 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-995203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 13:18:05.036393  474806 kubeadm.go:318] [bootstrap-token] Using token: 1kd151.wpmdjjusamerma6d
	I1020 13:18:05.039356  474806 out.go:252]   - Configuring RBAC rules ...
	I1020 13:18:05.039492  474806 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 13:18:05.045155  474806 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 13:18:05.054371  474806 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 13:18:05.059367  474806 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 13:18:05.063756  474806 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 13:18:05.070016  474806 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 13:18:05.084258  474806 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 13:18:05.372797  474806 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 13:18:05.463673  474806 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 13:18:05.465021  474806 kubeadm.go:318] 
	I1020 13:18:05.465099  474806 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 13:18:05.465114  474806 kubeadm.go:318] 
	I1020 13:18:05.465196  474806 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 13:18:05.465203  474806 kubeadm.go:318] 
	I1020 13:18:05.465230  474806 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 13:18:05.465320  474806 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 13:18:05.465386  474806 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 13:18:05.465396  474806 kubeadm.go:318] 
	I1020 13:18:05.465453  474806 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 13:18:05.465461  474806 kubeadm.go:318] 
	I1020 13:18:05.465510  474806 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 13:18:05.465519  474806 kubeadm.go:318] 
	I1020 13:18:05.465574  474806 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 13:18:05.465656  474806 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 13:18:05.465732  474806 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 13:18:05.465739  474806 kubeadm.go:318] 
	I1020 13:18:05.465846  474806 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 13:18:05.465935  474806 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 13:18:05.465944  474806 kubeadm.go:318] 
	I1020 13:18:05.466031  474806 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1kd151.wpmdjjusamerma6d \
	I1020 13:18:05.466144  474806 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 \
	I1020 13:18:05.466170  474806 kubeadm.go:318] 	--control-plane 
	I1020 13:18:05.466179  474806 kubeadm.go:318] 
	I1020 13:18:05.466288  474806 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 13:18:05.466300  474806 kubeadm.go:318] 
	I1020 13:18:05.466385  474806 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1kd151.wpmdjjusamerma6d \
	I1020 13:18:05.466496  474806 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 
	I1020 13:18:05.475729  474806 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1020 13:18:05.475919  474806 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 13:18:05.475964  474806 cni.go:84] Creating CNI manager for ""
	I1020 13:18:05.475987  474806 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:18:05.480959  474806 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 13:18:05.483904  474806 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 13:18:05.488945  474806 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1020 13:18:05.488970  474806 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 13:18:05.520781  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 13:18:06.478359  474806 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 13:18:06.478503  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:06.478583  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-995203 minikube.k8s.io/updated_at=2025_10_20T13_18_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=old-k8s-version-995203 minikube.k8s.io/primary=true
	I1020 13:18:06.653457  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:06.653522  474806 ops.go:34] apiserver oom_adj: -16
	I1020 13:18:07.153712  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:07.653528  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:08.154393  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:08.654374  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:09.154158  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:09.653528  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:10.153658  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:10.653592  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:11.153610  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:11.654073  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:12.153587  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:12.654316  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:13.154464  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:13.654054  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:14.154043  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:14.654182  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:15.154100  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:15.653726  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:16.153612  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:16.653882  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:17.154396  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:17.654332  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:18.153569  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:18.654013  474806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:18:18.744485  474806 kubeadm.go:1113] duration metric: took 12.266032526s to wait for elevateKubeSystemPrivileges
	I1020 13:18:18.744513  474806 kubeadm.go:402] duration metric: took 29.350996899s to StartCluster
	I1020 13:18:18.744529  474806 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:18:18.744589  474806 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:18:18.745566  474806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:18:18.745772  474806 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:18:18.745899  474806 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 13:18:18.746134  474806 config.go:182] Loaded profile config "old-k8s-version-995203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 13:18:18.746165  474806 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:18:18.746264  474806 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-995203"
	I1020 13:18:18.746279  474806 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-995203"
	I1020 13:18:18.746300  474806 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:18:18.746791  474806 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:18:18.747027  474806 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-995203"
	I1020 13:18:18.747069  474806 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-995203"
	I1020 13:18:18.747345  474806 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:18:18.749579  474806 out.go:179] * Verifying Kubernetes components...
	I1020 13:18:18.754129  474806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:18:18.786685  474806 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-995203"
	I1020 13:18:18.786730  474806 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:18:18.787198  474806 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:18:18.809182  474806 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:18:18.812434  474806 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:18:18.812481  474806 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:18:18.812550  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:18:18.821507  474806 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:18:18.821535  474806 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:18:18.821599  474806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:18:18.843480  474806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:18:18.856520  474806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:18:19.125393  474806 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 13:18:19.125516  474806 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:18:19.159801  474806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:18:19.177577  474806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:18:19.757299  474806 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-995203" to be "Ready" ...
	I1020 13:18:19.757624  474806 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1020 13:18:20.107922  474806 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 13:18:20.110909  474806 addons.go:514] duration metric: took 1.364730797s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1020 13:18:20.261950  474806 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-995203" context rescaled to 1 replicas
	W1020 13:18:21.761192  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:24.260410  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:26.260484  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:28.261363  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:30.760664  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:33.260689  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:35.760654  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:37.770825  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:40.260566  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:42.261058  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:44.761204  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:47.260521  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:49.260743  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:51.260944  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:53.760625  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	W1020 13:18:55.760877  474806 node_ready.go:57] node "old-k8s-version-995203" has "Ready":"False" status (will retry)
	I1020 13:18:56.760280  474806 node_ready.go:49] node "old-k8s-version-995203" is "Ready"
	I1020 13:18:56.760314  474806 node_ready.go:38] duration metric: took 37.002940326s for node "old-k8s-version-995203" to be "Ready" ...
	I1020 13:18:56.760330  474806 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:18:56.760425  474806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:18:56.773078  474806 api_server.go:72] duration metric: took 38.027276614s to wait for apiserver process to appear ...
	I1020 13:18:56.773103  474806 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:18:56.773122  474806 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:18:56.782845  474806 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 13:18:56.784206  474806 api_server.go:141] control plane version: v1.28.0
	I1020 13:18:56.784237  474806 api_server.go:131] duration metric: took 11.125015ms to wait for apiserver health ...
	I1020 13:18:56.784247  474806 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:18:56.787895  474806 system_pods.go:59] 8 kube-system pods found
	I1020 13:18:56.787931  474806 system_pods.go:61] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:18:56.787940  474806 system_pods.go:61] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running
	I1020 13:18:56.787945  474806 system_pods.go:61] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:18:56.787950  474806 system_pods.go:61] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running
	I1020 13:18:56.787956  474806 system_pods.go:61] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running
	I1020 13:18:56.787963  474806 system_pods.go:61] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:18:56.787972  474806 system_pods.go:61] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running
	I1020 13:18:56.787981  474806 system_pods.go:61] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:18:56.787991  474806 system_pods.go:74] duration metric: took 3.721352ms to wait for pod list to return data ...
	I1020 13:18:56.788000  474806 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:18:56.790544  474806 default_sa.go:45] found service account: "default"
	I1020 13:18:56.790572  474806 default_sa.go:55] duration metric: took 2.562437ms for default service account to be created ...
	I1020 13:18:56.790581  474806 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:18:56.793827  474806 system_pods.go:86] 8 kube-system pods found
	I1020 13:18:56.793861  474806 system_pods.go:89] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:18:56.793870  474806 system_pods.go:89] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running
	I1020 13:18:56.793877  474806 system_pods.go:89] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:18:56.793885  474806 system_pods.go:89] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running
	I1020 13:18:56.793895  474806 system_pods.go:89] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running
	I1020 13:18:56.793899  474806 system_pods.go:89] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:18:56.793904  474806 system_pods.go:89] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running
	I1020 13:18:56.793910  474806 system_pods.go:89] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:18:56.793930  474806 retry.go:31] will retry after 267.661989ms: missing components: kube-dns
	I1020 13:18:57.068594  474806 system_pods.go:86] 8 kube-system pods found
	I1020 13:18:57.068632  474806 system_pods.go:89] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:18:57.068641  474806 system_pods.go:89] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running
	I1020 13:18:57.068649  474806 system_pods.go:89] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:18:57.068668  474806 system_pods.go:89] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running
	I1020 13:18:57.068674  474806 system_pods.go:89] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running
	I1020 13:18:57.068678  474806 system_pods.go:89] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:18:57.068689  474806 system_pods.go:89] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running
	I1020 13:18:57.068695  474806 system_pods.go:89] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:18:57.068709  474806 retry.go:31] will retry after 321.377444ms: missing components: kube-dns
	I1020 13:18:57.395185  474806 system_pods.go:86] 8 kube-system pods found
	I1020 13:18:57.395230  474806 system_pods.go:89] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:18:57.395237  474806 system_pods.go:89] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running
	I1020 13:18:57.395244  474806 system_pods.go:89] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:18:57.395249  474806 system_pods.go:89] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running
	I1020 13:18:57.395255  474806 system_pods.go:89] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running
	I1020 13:18:57.395259  474806 system_pods.go:89] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:18:57.395263  474806 system_pods.go:89] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running
	I1020 13:18:57.395270  474806 system_pods.go:89] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:18:57.395285  474806 retry.go:31] will retry after 487.925953ms: missing components: kube-dns
	I1020 13:18:57.887062  474806 system_pods.go:86] 8 kube-system pods found
	I1020 13:18:57.887096  474806 system_pods.go:89] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Running
	I1020 13:18:57.887105  474806 system_pods.go:89] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running
	I1020 13:18:57.887110  474806 system_pods.go:89] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:18:57.887115  474806 system_pods.go:89] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running
	I1020 13:18:57.887120  474806 system_pods.go:89] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running
	I1020 13:18:57.887124  474806 system_pods.go:89] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:18:57.887128  474806 system_pods.go:89] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running
	I1020 13:18:57.887132  474806 system_pods.go:89] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Running
	I1020 13:18:57.887141  474806 system_pods.go:126] duration metric: took 1.096553384s to wait for k8s-apps to be running ...
	I1020 13:18:57.887154  474806 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:18:57.887218  474806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:18:57.900576  474806 system_svc.go:56] duration metric: took 13.412724ms WaitForService to wait for kubelet
	I1020 13:18:57.900605  474806 kubeadm.go:586] duration metric: took 39.15480991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:18:57.900623  474806 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:18:57.903408  474806 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:18:57.903451  474806 node_conditions.go:123] node cpu capacity is 2
	I1020 13:18:57.903466  474806 node_conditions.go:105] duration metric: took 2.836294ms to run NodePressure ...
	I1020 13:18:57.903478  474806 start.go:241] waiting for startup goroutines ...
	I1020 13:18:57.903491  474806 start.go:246] waiting for cluster config update ...
	I1020 13:18:57.903502  474806 start.go:255] writing updated cluster config ...
	I1020 13:18:57.903820  474806 ssh_runner.go:195] Run: rm -f paused
	I1020 13:18:57.907748  474806 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:18:57.912154  474806 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vqvss" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:57.919696  474806 pod_ready.go:94] pod "coredns-5dd5756b68-vqvss" is "Ready"
	I1020 13:18:57.919726  474806 pod_ready.go:86] duration metric: took 7.54627ms for pod "coredns-5dd5756b68-vqvss" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:57.922827  474806 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:57.928031  474806 pod_ready.go:94] pod "etcd-old-k8s-version-995203" is "Ready"
	I1020 13:18:57.928095  474806 pod_ready.go:86] duration metric: took 5.238712ms for pod "etcd-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:57.931338  474806 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:57.936254  474806 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-995203" is "Ready"
	I1020 13:18:57.936286  474806 pod_ready.go:86] duration metric: took 4.925691ms for pod "kube-apiserver-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:57.939374  474806 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:58.313205  474806 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-995203" is "Ready"
	I1020 13:18:58.313242  474806 pod_ready.go:86] duration metric: took 373.842874ms for pod "kube-controller-manager-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:58.514319  474806 pod_ready.go:83] waiting for pod "kube-proxy-n8zpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:58.912041  474806 pod_ready.go:94] pod "kube-proxy-n8zpg" is "Ready"
	I1020 13:18:58.912069  474806 pod_ready.go:86] duration metric: took 397.7216ms for pod "kube-proxy-n8zpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:59.113222  474806 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:59.517554  474806 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-995203" is "Ready"
	I1020 13:18:59.517580  474806 pod_ready.go:86] duration metric: took 404.312922ms for pod "kube-scheduler-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:18:59.517593  474806 pod_ready.go:40] duration metric: took 1.609814602s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:18:59.568336  474806 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1020 13:18:59.571315  474806 out.go:203] 
	W1020 13:18:59.574254  474806 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1020 13:18:59.577153  474806 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1020 13:18:59.580989  474806 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-995203" cluster and "default" namespace by default
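	For reference, the node_ready wait loop logged between 13:18:19 and 13:18:56 can be reproduced by hand with kubectl. This is a minimal sketch, not minikube's own code, and assumes the kubeconfig context minikube names after the profile:
	
	    # Poll the node's Ready condition until it reports True; the log above
	    # retries roughly every 2-2.5s and gives up after the 6m0s budget.
	    until kubectl --context old-k8s-version-995203 get node old-k8s-version-995203 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
	      sleep 2
	    done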
	
	
	==> CRI-O <==
	Oct 20 13:18:57 old-k8s-version-995203 crio[837]: time="2025-10-20T13:18:57.068108195Z" level=info msg="Created container 2aa6ab54e2f85826b01a60dc16b2265180032182c2704f9a2978656274a4d279: kube-system/coredns-5dd5756b68-vqvss/coredns" id=21049a77-c59c-426d-9fd3-2785e3189fe8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:18:57 old-k8s-version-995203 crio[837]: time="2025-10-20T13:18:57.069363563Z" level=info msg="Starting container: 2aa6ab54e2f85826b01a60dc16b2265180032182c2704f9a2978656274a4d279" id=bb6c4bd1-2623-4f0f-b81b-1b920106a914 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:18:57 old-k8s-version-995203 crio[837]: time="2025-10-20T13:18:57.071360043Z" level=info msg="Started container" PID=1955 containerID=2aa6ab54e2f85826b01a60dc16b2265180032182c2704f9a2978656274a4d279 description=kube-system/coredns-5dd5756b68-vqvss/coredns id=bb6c4bd1-2623-4f0f-b81b-1b920106a914 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3c064f10a03b267a8f8038c0cd62869adc2771f3f46403cdce96b89ff4b39c8
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.07931518Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f9f60bbd-b80d-4c1f-82aa-1dc49443bce7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.079396658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.08663058Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1bc75ed0c0e991ef2ea6b0964c470ce2e246e2a26938b1d4351c7ad7f5ea910d UID:78b08b5e-c021-42c9-bc66-5ad8d839afc5 NetNS:/var/run/netns/c26dc2c9-d480-4c43-a5e9-e8adeb6439a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002aa2138}] Aliases:map[]}"
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.086873003Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.176557053Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1bc75ed0c0e991ef2ea6b0964c470ce2e246e2a26938b1d4351c7ad7f5ea910d UID:78b08b5e-c021-42c9-bc66-5ad8d839afc5 NetNS:/var/run/netns/c26dc2c9-d480-4c43-a5e9-e8adeb6439a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002aa2138}] Aliases:map[]}"
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.176740161Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.230470712Z" level=info msg="Ran pod sandbox 1bc75ed0c0e991ef2ea6b0964c470ce2e246e2a26938b1d4351c7ad7f5ea910d with infra container: default/busybox/POD" id=f9f60bbd-b80d-4c1f-82aa-1dc49443bce7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.248526383Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4efb6468-b378-4721-af43-48e4f212489f name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.248848176Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4efb6468-b378-4721-af43-48e4f212489f name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.248961022Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4efb6468-b378-4721-af43-48e4f212489f name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.250350521Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=191e5e25-22c6-4ffe-9710-3a8501a69876 name=/runtime.v1.ImageService/PullImage
	Oct 20 13:19:00 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:00.265111332Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.201102598Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=191e5e25-22c6-4ffe-9710-3a8501a69876 name=/runtime.v1.ImageService/PullImage
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.202037896Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5bc70791-517c-4165-b3ea-4c483c55ee8f name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.203323632Z" level=info msg="Creating container: default/busybox/busybox" id=ed55e170-496f-4326-9781-077ff2003051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.203414308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.208780792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.209308331Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.230763052Z" level=info msg="Created container 9c54be40c9e623d765a1d6d2d3a2f8397361a7504a766aced60e7a6f80f4f8b8: default/busybox/busybox" id=ed55e170-496f-4326-9781-077ff2003051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.233118955Z" level=info msg="Starting container: 9c54be40c9e623d765a1d6d2d3a2f8397361a7504a766aced60e7a6f80f4f8b8" id=3b80793e-06ec-46c4-9bd9-75da69895b9f name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:19:02 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:02.236822247Z" level=info msg="Started container" PID=2009 containerID=9c54be40c9e623d765a1d6d2d3a2f8397361a7504a766aced60e7a6f80f4f8b8 description=default/busybox/busybox id=3b80793e-06ec-46c4-9bd9-75da69895b9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bc75ed0c0e991ef2ea6b0964c470ce2e246e2a26938b1d4351c7ad7f5ea910d
	Oct 20 13:19:09 old-k8s-version-995203 crio[837]: time="2025-10-20T13:19:09.923492171Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
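	The "container status" table below is a CRI-level listing; an equivalent view can be pulled by hand from inside the node with crictl. A sketch, assuming the profile name used throughout this run:
	
	    minikube -p old-k8s-version-995203 ssh -- sudo crictl ps -a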
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	9c54be40c9e62       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   1bc75ed0c0e99       busybox                                          default
	2aa6ab54e2f85       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      14 seconds ago       Running             coredns                   0                   f3c064f10a03b       coredns-5dd5756b68-vqvss                         kube-system
	63472bed1f9bb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   9a64e83176c42       storage-provisioner                              kube-system
	be9c15470ed94       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago       Running             kindnet-cni               0                   300e9f401feab       kindnet-5x5fk                                    kube-system
	a1c77962a9152       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      50 seconds ago       Running             kube-proxy                0                   c22fcd4ed9226       kube-proxy-n8zpg                                 kube-system
	583c8a4fe85b1       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      About a minute ago   Running             kube-controller-manager   0                   4f96dcbead591       kube-controller-manager-old-k8s-version-995203   kube-system
	632fd03d30700       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      About a minute ago   Running             kube-apiserver            0                   967e35a2dae98       kube-apiserver-old-k8s-version-995203            kube-system
	5c8c94320d8ac       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      About a minute ago   Running             kube-scheduler            0                   98fda59d2ec38       kube-scheduler-old-k8s-version-995203            kube-system
	a3779237f878f       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   ab320f9bafaa0       etcd-old-k8s-version-995203                      kube-system
	
	
	==> coredns [2aa6ab54e2f85826b01a60dc16b2265180032182c2704f9a2978656274a4d279] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53665 - 10633 "HINFO IN 5426273168865974220.8266770669651708318. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027222466s
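	The "host record injected" step at 13:18:19 rewrote this CoreDNS config with the sed pipeline shown earlier in the log; the resulting server block should look roughly like the following reconstruction (derived from that sed command, not captured from the cluster, with the untouched stock directives omitted):
	
	    .:53 {
	        log
	        errors
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	    }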
	
	
	==> describe nodes <==
	Name:               old-k8s-version-995203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-995203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=old-k8s-version-995203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_18_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:18:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-995203
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:19:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:19:07 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:19:07 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:19:07 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:19:07 +0000   Mon, 20 Oct 2025 13:18:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-995203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                20e6ba9e-7bcb-4309-8b46-32f70578149b
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-vqvss                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     52s
	  kube-system                 etcd-old-k8s-version-995203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         66s
	  kube-system                 kindnet-5x5fk                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-old-k8s-version-995203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-old-k8s-version-995203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-n8zpg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-old-k8s-version-995203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 50s   kube-proxy       
	  Normal  Starting                 66s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  66s   kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    66s   kubelet          Node old-k8s-version-995203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     66s   kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s   node-controller  Node old-k8s-version-995203 event: Registered Node old-k8s-version-995203 in Controller
	  Normal  NodeReady                15s   kubelet          Node old-k8s-version-995203 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct20 12:50] overlayfs: idmapped layers are currently not supported
	[Oct20 12:51] overlayfs: idmapped layers are currently not supported
	[Oct20 12:56] overlayfs: idmapped layers are currently not supported
	[Oct20 12:57] overlayfs: idmapped layers are currently not supported
	[Oct20 12:58] overlayfs: idmapped layers are currently not supported
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a3779237f878fc041804dee03436a8d01f84c72c65e6404a5b5ab64feedf9161] <==
	{"level":"info","ts":"2025-10-20T13:17:58.087129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-20T13:17:58.091886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-20T13:17:58.087169Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-20T13:17:58.087904Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-20T13:17:58.091653Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T13:17:58.092141Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T13:17:58.092051Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-20T13:17:58.843845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-20T13:17:58.844055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-20T13:17:58.8441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-20T13:17:58.844138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-20T13:17:58.844175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-20T13:17:58.844212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-20T13:17:58.844244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-20T13:17:58.845473Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-995203 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-20T13:17:58.845584Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T13:17:58.850727Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-20T13:17:58.850882Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T13:17:58.85244Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T13:17:58.852572Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T13:17:58.855592Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-20T13:17:58.855658Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-20T13:17:58.854556Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T13:17:58.869677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-20T13:17:58.869797Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 13:19:11 up  3:01,  0 user,  load average: 1.69, 3.00, 2.53
	Linux old-k8s-version-995203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be9c15470ed944887063e5b751c7d26aa255aed912d55cd9f8ae55a31720c84b] <==
	I1020 13:18:46.207654       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:18:46.207855       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:18:46.207972       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:18:46.207991       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:18:46.208004       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:18:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:18:46.409715       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:18:46.409751       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:18:46.409760       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:18:46.500617       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 13:18:46.803447       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:18:46.803478       1 metrics.go:72] Registering metrics
	I1020 13:18:46.803544       1 controller.go:711] "Syncing nftables rules"
	I1020 13:18:56.418265       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:18:56.418302       1 main.go:301] handling current node
	I1020 13:19:06.410681       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:19:06.410716       1 main.go:301] handling current node
	
	
	==> kube-apiserver [632fd03d307004b10c5f50dcda541c0b267980f83ece974bcbde2e4a110f2693] <==
	I1020 13:18:02.300070       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1020 13:18:02.316586       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 13:18:02.382008       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1020 13:18:02.383206       1 shared_informer.go:318] Caches are synced for configmaps
	I1020 13:18:02.383830       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1020 13:18:02.383860       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1020 13:18:02.384117       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1020 13:18:02.384132       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1020 13:18:02.387557       1 controller.go:624] quota admission added evaluator for: namespaces
	I1020 13:18:02.435306       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:18:03.186894       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 13:18:03.191249       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 13:18:03.191273       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:18:03.785447       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:18:03.839923       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:18:03.916726       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 13:18:03.923255       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1020 13:18:03.924392       1 controller.go:624] quota admission added evaluator for: endpoints
	I1020 13:18:03.931895       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:18:04.340108       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1020 13:18:05.357581       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1020 13:18:05.371244       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 13:18:05.384342       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1020 13:18:18.836979       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1020 13:18:18.910597       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [583c8a4fe85b15575016ffb63673032c73d14763e84be90d4f47aae278017e93] <==
	I1020 13:18:18.382742       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-995203" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1020 13:18:18.395530       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 13:18:18.434444       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 13:18:18.777989       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 13:18:18.778031       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1020 13:18:18.836031       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 13:18:18.858080       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1020 13:18:18.969012       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-n8zpg"
	I1020 13:18:18.992447       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5x5fk"
	I1020 13:18:19.271042       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-4hzgs"
	I1020 13:18:19.315957       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vqvss"
	I1020 13:18:19.334787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="486.803447ms"
	I1020 13:18:19.345552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.65181ms"
	I1020 13:18:19.346716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.005µs"
	I1020 13:18:19.812912       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1020 13:18:19.868886       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-4hzgs"
	I1020 13:18:19.902477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.50847ms"
	I1020 13:18:19.938152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.62654ms"
	I1020 13:18:19.939288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.354µs"
	I1020 13:18:56.664245       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.58µs"
	I1020 13:18:56.680915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.554µs"
	I1020 13:18:57.782763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.919µs"
	I1020 13:18:57.821173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.017884ms"
	I1020 13:18:57.821289       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.339µs"
	I1020 13:18:58.371778       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [a1c77962a9152ce1f8b920633a271acc772f6ee418a5fe4db076dc907a5d41fb] <==
	I1020 13:18:20.892885       1 server_others.go:69] "Using iptables proxy"
	I1020 13:18:20.906609       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1020 13:18:20.931751       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:18:20.933407       1 server_others.go:152] "Using iptables Proxier"
	I1020 13:18:20.933438       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1020 13:18:20.933446       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1020 13:18:20.933482       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1020 13:18:20.933709       1 server.go:846] "Version info" version="v1.28.0"
	I1020 13:18:20.933728       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:18:20.935058       1 config.go:188] "Starting service config controller"
	I1020 13:18:20.935076       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1020 13:18:20.935092       1 config.go:97] "Starting endpoint slice config controller"
	I1020 13:18:20.935095       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1020 13:18:20.935472       1 config.go:315] "Starting node config controller"
	I1020 13:18:20.935487       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1020 13:18:21.036121       1 shared_informer.go:318] Caches are synced for node config
	I1020 13:18:21.036126       1 shared_informer.go:318] Caches are synced for service config
	I1020 13:18:21.036147       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5c8c94320d8ac5a9ebfde05d61bc5956b20e6fc2539ef316ab3d241d8dd305ad] <==
	W1020 13:18:02.651305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1020 13:18:02.651870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1020 13:18:02.651339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1020 13:18:02.651935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1020 13:18:02.651415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1020 13:18:02.652000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1020 13:18:02.651464       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1020 13:18:02.652082       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1020 13:18:02.652316       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1020 13:18:02.652407       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1020 13:18:02.652489       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1020 13:18:02.652529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1020 13:18:02.652617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1020 13:18:02.652654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1020 13:18:02.652728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1020 13:18:02.652863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1020 13:18:02.653053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1020 13:18:02.653110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1020 13:18:03.493775       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1020 13:18:03.493826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1020 13:18:03.588096       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1020 13:18:03.588136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1020 13:18:03.833850       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1020 13:18:03.833883       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1020 13:18:05.742121       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 20 13:18:20 old-k8s-version-995203 kubelet[1370]: E1020 13:18:20.162279    1370 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 20 13:18:20 old-k8s-version-995203 kubelet[1370]: E1020 13:18:20.162388    1370 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/28a1c992-7dd6-492b-b991-579f78661803-kube-proxy podName:28a1c992-7dd6-492b-b991-579f78661803 nodeName:}" failed. No retries permitted until 2025-10-20 13:18:20.662362953 +0000 UTC m=+15.348939328 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/28a1c992-7dd6-492b-b991-579f78661803-kube-proxy") pod "kube-proxy-n8zpg" (UID: "28a1c992-7dd6-492b-b991-579f78661803") : failed to sync configmap cache: timed out waiting for the condition
	Oct 20 13:18:20 old-k8s-version-995203 kubelet[1370]: E1020 13:18:20.220044    1370 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 20 13:18:20 old-k8s-version-995203 kubelet[1370]: E1020 13:18:20.220095    1370 projected.go:198] Error preparing data for projected volume kube-api-access-tr472 for pod kube-system/kube-proxy-n8zpg: failed to sync configmap cache: timed out waiting for the condition
	Oct 20 13:18:20 old-k8s-version-995203 kubelet[1370]: E1020 13:18:20.220178    1370 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28a1c992-7dd6-492b-b991-579f78661803-kube-api-access-tr472 podName:28a1c992-7dd6-492b-b991-579f78661803 nodeName:}" failed. No retries permitted until 2025-10-20 13:18:20.720156377 +0000 UTC m=+15.406732744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tr472" (UniqueName: "kubernetes.io/projected/28a1c992-7dd6-492b-b991-579f78661803-kube-api-access-tr472") pod "kube-proxy-n8zpg" (UID: "28a1c992-7dd6-492b-b991-579f78661803") : failed to sync configmap cache: timed out waiting for the condition
	Oct 20 13:18:20 old-k8s-version-995203 kubelet[1370]: W1020 13:18:20.805271    1370 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/crio-c22fcd4ed92260e6d5ff6727bd186b44c66071a984fe795db4466760674f18d6 WatchSource:0}: Error finding container c22fcd4ed92260e6d5ff6727bd186b44c66071a984fe795db4466760674f18d6: Status 404 returned error can't find the container with id c22fcd4ed92260e6d5ff6727bd186b44c66071a984fe795db4466760674f18d6
	Oct 20 13:18:21 old-k8s-version-995203 kubelet[1370]: I1020 13:18:21.705173    1370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n8zpg" podStartSLOduration=3.705091834 podCreationTimestamp="2025-10-20 13:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:18:21.704915273 +0000 UTC m=+16.391491640" watchObservedRunningTime="2025-10-20 13:18:21.705091834 +0000 UTC m=+16.391668209"
	Oct 20 13:18:28 old-k8s-version-995203 kubelet[1370]: E1020 13:18:28.835800    1370 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 500 Internal Server Error; artifact err: get manifest: build image source: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 500 Internal Server Error" image="docker.io/kindest/kindnetd:v20250512-df8de77b"
	Oct 20 13:18:28 old-k8s-version-995203 kubelet[1370]: E1020 13:18:28.835866    1370 kuberuntime_image.go:53] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 500 Internal Server Error; artifact err: get manifest: build image source: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 500 Internal Server Error" image="docker.io/kindest/kindnetd:v20250512-df8de77b"
	Oct 20 13:18:28 old-k8s-version-995203 kubelet[1370]: E1020 13:18:28.836014    1370 kuberuntime_manager.go:1209] container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250512-df8de77b,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-hbl4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-5x5fk_kube-system(023a6ce0-c4bb-424e-8283-7fb169e3ead2): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 500 Internal Server Error; artifact err: get manifest: build image source: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 500 Internal Server Error
	Oct 20 13:18:28 old-k8s-version-995203 kubelet[1370]: E1020 13:18:28.836077    1370 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 500 Internal Server Error; artifact err: get manifest: build image source: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 500 Internal Server Error\"" pod="kube-system/kindnet-5x5fk" podUID="023a6ce0-c4bb-424e-8283-7fb169e3ead2"
	Oct 20 13:18:29 old-k8s-version-995203 kubelet[1370]: E1020 13:18:29.707363    1370 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\"\"" pod="kube-system/kindnet-5x5fk" podUID="023a6ce0-c4bb-424e-8283-7fb169e3ead2"
	Oct 20 13:18:56 old-k8s-version-995203 kubelet[1370]: I1020 13:18:56.624622    1370 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 20 13:18:56 old-k8s-version-995203 kubelet[1370]: I1020 13:18:56.657135    1370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5x5fk" podStartSLOduration=13.118889586 podCreationTimestamp="2025-10-20 13:18:18 +0000 UTC" firstStartedPulling="2025-10-20 13:18:20.558459701 +0000 UTC m=+15.245036068" lastFinishedPulling="2025-10-20 13:18:46.09666202 +0000 UTC m=+40.783238403" observedRunningTime="2025-10-20 13:18:46.763903119 +0000 UTC m=+41.450479494" watchObservedRunningTime="2025-10-20 13:18:56.657091921 +0000 UTC m=+51.343668296"
	Oct 20 13:18:56 old-k8s-version-995203 kubelet[1370]: I1020 13:18:56.657581    1370 topology_manager.go:215] "Topology Admit Handler" podUID="b7ec10ed-30b5-4af7-ba79-bf7e9a899603" podNamespace="kube-system" podName="coredns-5dd5756b68-vqvss"
	Oct 20 13:18:56 old-k8s-version-995203 kubelet[1370]: I1020 13:18:56.660450    1370 topology_manager.go:215] "Topology Admit Handler" podUID="386e5757-2c12-4037-806b-3451ff6562e2" podNamespace="kube-system" podName="storage-provisioner"
	Oct 20 13:18:56 old-k8s-version-995203 kubelet[1370]: I1020 13:18:56.753121    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/386e5757-2c12-4037-806b-3451ff6562e2-tmp\") pod \"storage-provisioner\" (UID: \"386e5757-2c12-4037-806b-3451ff6562e2\") " pod="kube-system/storage-provisioner"
	Oct 20 13:18:56 old-k8s-version-995203 kubelet[1370]: I1020 13:18:56.753183    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7ec10ed-30b5-4af7-ba79-bf7e9a899603-config-volume\") pod \"coredns-5dd5756b68-vqvss\" (UID: \"b7ec10ed-30b5-4af7-ba79-bf7e9a899603\") " pod="kube-system/coredns-5dd5756b68-vqvss"
	Oct 20 13:18:56 old-k8s-version-995203 kubelet[1370]: I1020 13:18:56.753217    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56c98\" (UniqueName: \"kubernetes.io/projected/b7ec10ed-30b5-4af7-ba79-bf7e9a899603-kube-api-access-56c98\") pod \"coredns-5dd5756b68-vqvss\" (UID: \"b7ec10ed-30b5-4af7-ba79-bf7e9a899603\") " pod="kube-system/coredns-5dd5756b68-vqvss"
	Oct 20 13:18:56 old-k8s-version-995203 kubelet[1370]: I1020 13:18:56.753245    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44pdg\" (UniqueName: \"kubernetes.io/projected/386e5757-2c12-4037-806b-3451ff6562e2-kube-api-access-44pdg\") pod \"storage-provisioner\" (UID: \"386e5757-2c12-4037-806b-3451ff6562e2\") " pod="kube-system/storage-provisioner"
	Oct 20 13:18:57 old-k8s-version-995203 kubelet[1370]: W1020 13:18:57.005413    1370 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/crio-f3c064f10a03b267a8f8038c0cd62869adc2771f3f46403cdce96b89ff4b39c8 WatchSource:0}: Error finding container f3c064f10a03b267a8f8038c0cd62869adc2771f3f46403cdce96b89ff4b39c8: Status 404 returned error can't find the container with id f3c064f10a03b267a8f8038c0cd62869adc2771f3f46403cdce96b89ff4b39c8
	Oct 20 13:18:57 old-k8s-version-995203 kubelet[1370]: I1020 13:18:57.808061    1370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vqvss" podStartSLOduration=38.808022379 podCreationTimestamp="2025-10-20 13:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:18:57.782354641 +0000 UTC m=+52.468931008" watchObservedRunningTime="2025-10-20 13:18:57.808022379 +0000 UTC m=+52.494598754"
	Oct 20 13:18:57 old-k8s-version-995203 kubelet[1370]: I1020 13:18:57.830006    1370 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=37.829960323 podCreationTimestamp="2025-10-20 13:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:18:57.829425104 +0000 UTC m=+52.516001487" watchObservedRunningTime="2025-10-20 13:18:57.829960323 +0000 UTC m=+52.516536690"
	Oct 20 13:18:59 old-k8s-version-995203 kubelet[1370]: I1020 13:18:59.777131    1370 topology_manager.go:215] "Topology Admit Handler" podUID="78b08b5e-c021-42c9-bc66-5ad8d839afc5" podNamespace="default" podName="busybox"
	Oct 20 13:18:59 old-k8s-version-995203 kubelet[1370]: I1020 13:18:59.877715    1370 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pthmp\" (UniqueName: \"kubernetes.io/projected/78b08b5e-c021-42c9-bc66-5ad8d839afc5-kube-api-access-pthmp\") pod \"busybox\" (UID: \"78b08b5e-c021-42c9-bc66-5ad8d839afc5\") " pod="default/busybox"
	
	
	==> storage-provisioner [63472bed1f9bb5622b10e04c5335791a71fcdc515f5d841cc4e48b0a1afd2af1] <==
	I1020 13:18:57.028122       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 13:18:57.056286       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:18:57.056517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1020 13:18:57.077233       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:18:57.077433       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c95c54c-44b3-45ac-9717-b9e1fd84bb4f", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-995203_8a3a4d19-dc01-4074-b1ea-a19ca8f45391 became leader
	I1020 13:18:57.085669       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-995203_8a3a4d19-dc01-4074-b1ea-a19ca8f45391!
	I1020 13:18:57.185802       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-995203_8a3a4d19-dc01-4074-b1ea-a19ca8f45391!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-995203 -n old-k8s-version-995203
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-995203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)
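Note that the kubelet log above attributes the kindnet start failure to a registry-side HTTP 500 from docker.io, not to a cluster fault. A minimal sketch for re-checking that pull by hand from the node, assuming the profile name and image tag captured above (these commands do not appear in the test output):

	# SSH into the node for this profile and retry the pull through the CRI,
	# which exercises the same registry path the kubelet used:
	minikube ssh -p old-k8s-version-995203 -- sudo crictl pull docker.io/kindest/kindnetd:v20250512-df8de77b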

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-995203 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-995203 --alsologtostderr -v=1: exit status 80 (2.053453908s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-995203 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 13:20:23.184902  481112 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:20:23.185071  481112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:20:23.185081  481112 out.go:374] Setting ErrFile to fd 2...
	I1020 13:20:23.185086  481112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:20:23.185347  481112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:20:23.185625  481112 out.go:368] Setting JSON to false
	I1020 13:20:23.185653  481112 mustload.go:65] Loading cluster: old-k8s-version-995203
	I1020 13:20:23.186066  481112 config.go:182] Loaded profile config "old-k8s-version-995203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 13:20:23.186527  481112 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:20:23.205184  481112 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:20:23.205547  481112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:20:23.269701  481112 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-20 13:20:23.259435796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:20:23.270362  481112 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-995203 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 13:20:23.274978  481112 out.go:179] * Pausing node old-k8s-version-995203 ... 
	I1020 13:20:23.277567  481112 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:20:23.277926  481112 ssh_runner.go:195] Run: systemctl --version
	I1020 13:20:23.277979  481112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:20:23.296247  481112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:20:23.399210  481112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:20:23.412727  481112 pause.go:52] kubelet running: true
	I1020 13:20:23.412807  481112 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:20:23.647634  481112 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:20:23.647733  481112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:20:23.725337  481112 cri.go:89] found id: "b9a894360ae536522ed7f07cec598ce802c9de323861ffe9e78dc6dc8622ad05"
	I1020 13:20:23.725402  481112 cri.go:89] found id: "4f3250498e84ac75870e6c8ade992e57ff6eab7f59095c1680c1c603a64e29d2"
	I1020 13:20:23.725414  481112 cri.go:89] found id: "0cb2bbb6a3818fc93bc3dcc1c1a14c46c017bb411392ca6add35c0107efebf19"
	I1020 13:20:23.725418  481112 cri.go:89] found id: "08498359d61f644f4b52ac712ce52b9a566408a317365cb6867d4ae77be3b7a1"
	I1020 13:20:23.725422  481112 cri.go:89] found id: "00f6d21817a9ad327ca249fe819bdc41de35f885521a5955856f753a86b2b56c"
	I1020 13:20:23.725426  481112 cri.go:89] found id: "52a161091b397fb4b3f3af4bafa726d8d45a44d17cf647d542ccdab6bd1b0daf"
	I1020 13:20:23.725430  481112 cri.go:89] found id: "e8b5d4c1732bb4b651f2ce3c3e2b44ffd50e135a1921aa5edf3ec4e3acb343a4"
	I1020 13:20:23.725433  481112 cri.go:89] found id: "5392efc3a2e72c11b9ba1d3e8474612440b8297d93e158efe009f84187741706"
	I1020 13:20:23.725436  481112 cri.go:89] found id: "7cdb1584428d91650e965692588e4339e2de21c156b9c94681fc8108ca04cfc3"
	I1020 13:20:23.725443  481112 cri.go:89] found id: "e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f"
	I1020 13:20:23.725447  481112 cri.go:89] found id: "d70e7a2fe364404b1bf3b0bb1c7eff9af141659e4d5c48c62445a77a433eec2e"
	I1020 13:20:23.725451  481112 cri.go:89] found id: ""
	I1020 13:20:23.725508  481112 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:20:23.736596  481112 retry.go:31] will retry after 322.242722ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:20:23Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:20:24.059127  481112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:20:24.073224  481112 pause.go:52] kubelet running: false
	I1020 13:20:24.073311  481112 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:20:24.256928  481112 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:20:24.257003  481112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:20:24.329003  481112 cri.go:89] found id: "b9a894360ae536522ed7f07cec598ce802c9de323861ffe9e78dc6dc8622ad05"
	I1020 13:20:24.329026  481112 cri.go:89] found id: "4f3250498e84ac75870e6c8ade992e57ff6eab7f59095c1680c1c603a64e29d2"
	I1020 13:20:24.329047  481112 cri.go:89] found id: "0cb2bbb6a3818fc93bc3dcc1c1a14c46c017bb411392ca6add35c0107efebf19"
	I1020 13:20:24.329052  481112 cri.go:89] found id: "08498359d61f644f4b52ac712ce52b9a566408a317365cb6867d4ae77be3b7a1"
	I1020 13:20:24.329055  481112 cri.go:89] found id: "00f6d21817a9ad327ca249fe819bdc41de35f885521a5955856f753a86b2b56c"
	I1020 13:20:24.329058  481112 cri.go:89] found id: "52a161091b397fb4b3f3af4bafa726d8d45a44d17cf647d542ccdab6bd1b0daf"
	I1020 13:20:24.329061  481112 cri.go:89] found id: "e8b5d4c1732bb4b651f2ce3c3e2b44ffd50e135a1921aa5edf3ec4e3acb343a4"
	I1020 13:20:24.329064  481112 cri.go:89] found id: "5392efc3a2e72c11b9ba1d3e8474612440b8297d93e158efe009f84187741706"
	I1020 13:20:24.329068  481112 cri.go:89] found id: "7cdb1584428d91650e965692588e4339e2de21c156b9c94681fc8108ca04cfc3"
	I1020 13:20:24.329078  481112 cri.go:89] found id: "e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f"
	I1020 13:20:24.329084  481112 cri.go:89] found id: "d70e7a2fe364404b1bf3b0bb1c7eff9af141659e4d5c48c62445a77a433eec2e"
	I1020 13:20:24.329088  481112 cri.go:89] found id: ""
	I1020 13:20:24.329137  481112 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:20:24.340233  481112 retry.go:31] will retry after 530.768988ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:20:24Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:20:24.871559  481112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:20:24.884240  481112 pause.go:52] kubelet running: false
	I1020 13:20:24.884306  481112 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:20:25.074318  481112 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:20:25.074408  481112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:20:25.153036  481112 cri.go:89] found id: "b9a894360ae536522ed7f07cec598ce802c9de323861ffe9e78dc6dc8622ad05"
	I1020 13:20:25.153060  481112 cri.go:89] found id: "4f3250498e84ac75870e6c8ade992e57ff6eab7f59095c1680c1c603a64e29d2"
	I1020 13:20:25.153065  481112 cri.go:89] found id: "0cb2bbb6a3818fc93bc3dcc1c1a14c46c017bb411392ca6add35c0107efebf19"
	I1020 13:20:25.153069  481112 cri.go:89] found id: "08498359d61f644f4b52ac712ce52b9a566408a317365cb6867d4ae77be3b7a1"
	I1020 13:20:25.153073  481112 cri.go:89] found id: "00f6d21817a9ad327ca249fe819bdc41de35f885521a5955856f753a86b2b56c"
	I1020 13:20:25.153076  481112 cri.go:89] found id: "52a161091b397fb4b3f3af4bafa726d8d45a44d17cf647d542ccdab6bd1b0daf"
	I1020 13:20:25.153080  481112 cri.go:89] found id: "e8b5d4c1732bb4b651f2ce3c3e2b44ffd50e135a1921aa5edf3ec4e3acb343a4"
	I1020 13:20:25.153083  481112 cri.go:89] found id: "5392efc3a2e72c11b9ba1d3e8474612440b8297d93e158efe009f84187741706"
	I1020 13:20:25.153086  481112 cri.go:89] found id: "7cdb1584428d91650e965692588e4339e2de21c156b9c94681fc8108ca04cfc3"
	I1020 13:20:25.153092  481112 cri.go:89] found id: "e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f"
	I1020 13:20:25.153096  481112 cri.go:89] found id: "d70e7a2fe364404b1bf3b0bb1c7eff9af141659e4d5c48c62445a77a433eec2e"
	I1020 13:20:25.153100  481112 cri.go:89] found id: ""
	I1020 13:20:25.153153  481112 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:20:25.169393  481112 out.go:203] 
	W1020 13:20:25.172312  481112 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:20:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:20:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 13:20:25.172338  481112 out.go:285] * 
	* 
	W1020 13:20:25.179824  481112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 13:20:25.182721  481112 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-995203 --alsologtostderr -v=1 failed: exit status 80
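The trace above shows the pause control flow: check the kubelet, disable it, enumerate containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces via crictl, then call `sudo runc list -f json`, which fails all three attempts because /run/runc does not exist on this crio node. A minimal sketch for confirming that last step by hand, assuming the same profile name; the candidate state-root paths are assumptions, not taken from the captured output:

	# Re-run the exact command the pause path retried:
	minikube ssh -p old-k8s-version-995203 -- sudo runc list -f json
	# See which OCI runtime state roots actually exist on the node; crio may
	# drive runc (or crun) with a state root other than the default /run/runc:
	minikube ssh -p old-k8s-version-995203 -- sudo ls -d /run/runc /run/crun /run/crio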
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-995203
helpers_test.go:243: (dbg) docker inspect old-k8s-version-995203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743",
	        "Created": "2025-10-20T13:17:39.717282575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478478,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:19:25.013777837Z",
	            "FinishedAt": "2025-10-20T13:19:24.155964442Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/hosts",
	        "LogPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743-json.log",
	        "Name": "/old-k8s-version-995203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-995203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-995203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743",
	                "LowerDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-995203",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-995203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-995203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-995203",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-995203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da803722a71115b58020b6663177b03eae17c4142671af0a9dca7d72fb2c1dad",
	            "SandboxKey": "/var/run/docker/netns/da803722a711",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-995203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:18:25:df:e2:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e48fc4c3ab83a2a7d44a282549d8182e6b6d0f2aee11543e9c45f4ee745a84b",
	                    "EndpointID": "867d45fc36b44321d83f981dbc56271ba3a8d99277fa22247031f910d7716b53",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-995203",
	                        "bc62e325c2a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
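The inspect output above confirms the container is still "running" with SSH (22/tcp) and the API-server port (8443/tcp) published on 127.0.0.1. As a sketch, assuming the same container name and a local docker CLI, single fields can be pulled without scanning the full JSON:

	docker inspect -f '{{.State.Status}}' old-k8s-version-995203   # running
	docker port old-k8s-version-995203 22                          # 127.0.0.1:33423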
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-995203 -n old-k8s-version-995203
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-995203 -n old-k8s-version-995203: exit status 2 (343.788754ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-995203 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-995203 logs -n 25: (1.272218556s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-308474 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo containerd config dump                                                                                                                                                                                                  │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo crio config                                                                                                                                                                                                             │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ delete  │ -p cilium-308474                                                                                                                                                                                                                              │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ start   │ -p force-systemd-env-534257 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-534257  │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ delete  │ -p force-systemd-env-534257                                                                                                                                                                                                                   │ force-systemd-env-534257  │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-066011    │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ delete  │ -p kubernetes-upgrade-314577                                                                                                                                                                                                                  │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p cert-options-123220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ cert-options-123220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ -p cert-options-123220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ delete  │ -p cert-options-123220                                                                                                                                                                                                                        │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	│ stop    │ -p old-k8s-version-995203 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-995203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-066011    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	│ image   │ old-k8s-version-995203 image list --format=json                                                                                                                                                                                               │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ pause   │ -p old-k8s-version-995203 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
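Note the last audit row: the pause command has a START TIME but no END TIME, which matches the exit status 80 above; it never completed. Rerunning it by hand (assuming the profile still exists) reproduces the failure:

	out/minikube-linux-arm64 pause -p old-k8s-version-995203 --alsologtostderr -v=1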
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:19:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
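	(For example, the first entry below, "I1020 13:19:30.440336  479219 out.go:360]", decodes per this format as severity I for info, date 10/20, time 13:19:30.440336, thread id 479219, and source location out.go line 360.)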
	I1020 13:19:30.440336  479219 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:19:30.440602  479219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:19:30.440607  479219 out.go:374] Setting ErrFile to fd 2...
	I1020 13:19:30.440610  479219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:19:30.440888  479219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:19:30.441271  479219 out.go:368] Setting JSON to false
	I1020 13:19:30.442258  479219 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10921,"bootTime":1760955450,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:19:30.442317  479219 start.go:141] virtualization:  
	I1020 13:19:30.446075  479219 out.go:179] * [cert-expiration-066011] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:19:30.449118  479219 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:19:30.449227  479219 notify.go:220] Checking for updates...
	I1020 13:19:30.455091  479219 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:19:30.458235  479219 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:19:30.461321  479219 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:19:30.464490  479219 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:19:30.467453  479219 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:19:30.470899  479219 config.go:182] Loaded profile config "cert-expiration-066011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:19:30.471533  479219 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:19:30.520507  479219 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:19:30.520613  479219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:19:30.608629  479219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-20 13:19:30.597958897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:19:30.608716  479219 docker.go:318] overlay module found
	I1020 13:19:30.611652  479219 out.go:179] * Using the docker driver based on existing profile
	I1020 13:19:30.614450  479219 start.go:305] selected driver: docker
	I1020 13:19:30.614460  479219 start.go:925] validating driver "docker" against &{Name:cert-expiration-066011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-066011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:19:30.614560  479219 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:19:30.615247  479219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:19:30.703060  479219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-20 13:19:30.689635864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:19:30.703355  479219 cni.go:84] Creating CNI manager for ""
	I1020 13:19:30.703416  479219 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:19:30.703450  479219 start.go:349] cluster config:
	{Name:cert-expiration-066011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-066011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:19:30.706643  479219 out.go:179] * Starting "cert-expiration-066011" primary control-plane node in "cert-expiration-066011" cluster
	I1020 13:19:30.709630  479219 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:19:30.712528  479219 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:19:30.715420  479219 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:19:30.715467  479219 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:19:30.715478  479219 cache.go:58] Caching tarball of preloaded images
	I1020 13:19:30.715590  479219 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:19:30.715598  479219 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:19:30.715706  479219 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/cert-expiration-066011/config.json ...
	I1020 13:19:30.715943  479219 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:19:30.760569  479219 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:19:30.760581  479219 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:19:30.760595  479219 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:19:30.760622  479219 start.go:360] acquireMachinesLock for cert-expiration-066011: {Name:mkfa484931163ca74c18f33f3fb3d9634523330e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:19:30.760675  479219 start.go:364] duration metric: took 37.145µs to acquireMachinesLock for "cert-expiration-066011"
	I1020 13:19:30.760692  479219 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:19:30.760697  479219 fix.go:54] fixHost starting: 
	I1020 13:19:30.760962  479219 cli_runner.go:164] Run: docker container inspect cert-expiration-066011 --format={{.State.Status}}
	I1020 13:19:30.791563  479219 fix.go:112] recreateIfNeeded on cert-expiration-066011: state=Running err=<nil>
	W1020 13:19:30.791589  479219 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 13:19:29.732731  478350 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:19:29.736487  478350 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:19:29.736518  478350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:19:29.736531  478350 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:19:29.736587  478350 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:19:29.736677  478350 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:19:29.736805  478350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:19:29.744568  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:19:29.764222  478350 start.go:296] duration metric: took 162.567461ms for postStartSetup
	I1020 13:19:29.764325  478350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:19:29.764436  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:29.782860  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
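	(minikube reaches the node over SSH via the host port that docker mapped to the container's port 22. A manual equivalent, using the key path, port, and user shown in the log line above:

	ssh -o StrictHostKeyChecking=no -p 33423 -i /home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa docker@127.0.0.1

	or simply: out/minikube-linux-arm64 ssh -p old-k8s-version-995203)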
	I1020 13:19:29.885706  478350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:19:29.890381  478350 fix.go:56] duration metric: took 4.948170086s for fixHost
	I1020 13:19:29.890404  478350 start.go:83] releasing machines lock for "old-k8s-version-995203", held for 4.94822251s
	I1020 13:19:29.890472  478350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-995203
	I1020 13:19:29.909473  478350 ssh_runner.go:195] Run: cat /version.json
	I1020 13:19:29.909522  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:29.909569  478350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:19:29.909622  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:29.936196  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:29.946230  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:30.131952  478350 ssh_runner.go:195] Run: systemctl --version
	I1020 13:19:30.139453  478350 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:19:30.180463  478350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:19:30.186266  478350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:19:30.186347  478350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:19:30.194854  478350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:19:30.194931  478350 start.go:495] detecting cgroup driver to use...
	I1020 13:19:30.194982  478350 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:19:30.195062  478350 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:19:30.211678  478350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:19:30.225325  478350 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:19:30.225416  478350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:19:30.241858  478350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:19:30.255443  478350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:19:30.381273  478350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:19:30.569731  478350 docker.go:234] disabling docker service ...
	I1020 13:19:30.569801  478350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:19:30.591130  478350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:19:30.608640  478350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:19:30.784194  478350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:19:30.975684  478350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:19:30.989229  478350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:19:31.013948  478350 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1020 13:19:31.014025  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.024316  478350 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:19:31.024464  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.035823  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.046586  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.061230  478350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:19:31.071521  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.089642  478350 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.106599  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
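	(The sed edits above pin the pause image, set cgroup_manager to cgroupfs, run conmon in the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. Before the daemon-reload and crio restart that follow, the result can be spot-checked on the node, assuming the same drop-in path:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf)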
	I1020 13:19:31.119087  478350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:19:31.129379  478350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:19:31.138537  478350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:19:31.316419  478350 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 13:19:31.486985  478350 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:19:31.487057  478350 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:19:31.494591  478350 start.go:563] Will wait 60s for crictl version
	I1020 13:19:31.494656  478350 ssh_runner.go:195] Run: which crictl
	I1020 13:19:31.499689  478350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:19:31.547438  478350 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:19:31.547618  478350 ssh_runner.go:195] Run: crio --version
	I1020 13:19:31.582864  478350 ssh_runner.go:195] Run: crio --version
	I1020 13:19:31.619157  478350 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1020 13:19:31.621976  478350 cli_runner.go:164] Run: docker network inspect old-k8s-version-995203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:19:31.644246  478350 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 13:19:31.648404  478350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:19:31.662286  478350 kubeadm.go:883] updating cluster {Name:old-k8s-version-995203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:19:31.662402  478350 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 13:19:31.662452  478350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:19:31.715145  478350 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:19:31.715221  478350 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:19:31.715312  478350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:19:31.745096  478350 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:19:31.745115  478350 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:19:31.745124  478350 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1020 13:19:31.745225  478350 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-995203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
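	(The kubelet ExecStart override above pins the versioned binary, the CRI-O kubeconfig, and the node identity; the rendered unit and drop-in are written a few lines below to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. On the node, the effective unit, base plus drop-in, can be reviewed with:

	sudo systemctl cat kubelet --no-pager)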
	I1020 13:19:31.745304  478350 ssh_runner.go:195] Run: crio config
	I1020 13:19:31.834831  478350 cni.go:84] Creating CNI manager for ""
	I1020 13:19:31.834855  478350 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:19:31.834874  478350 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:19:31.834899  478350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-995203 NodeName:old-k8s-version-995203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:19:31.835036  478350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-995203"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
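	(This generated config is staged as /var/tmp/minikube/kubeadm.yaml.new, see the scp below, and then applied. As a sketch, assuming the file has been promoted to /var/tmp/minikube/kubeadm.yaml on the node, kubeadm can exercise it without touching the cluster:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run)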
	
	I1020 13:19:31.835112  478350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1020 13:19:31.843591  478350 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:19:31.843664  478350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:19:31.852106  478350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1020 13:19:31.870811  478350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:19:31.885053  478350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1020 13:19:31.900206  478350 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:19:31.910294  478350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:19:31.921032  478350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:19:32.074914  478350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:19:32.092389  478350 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203 for IP: 192.168.76.2
	I1020 13:19:32.092408  478350 certs.go:195] generating shared ca certs ...
	I1020 13:19:32.092424  478350 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:19:32.092579  478350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:19:32.092620  478350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:19:32.092627  478350 certs.go:257] generating profile certs ...
	I1020 13:19:32.092712  478350 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.key
	I1020 13:19:32.092773  478350 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key.8c7cc26d
	I1020 13:19:32.092816  478350 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.key
	I1020 13:19:32.092929  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:19:32.092964  478350 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:19:32.092972  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:19:32.092996  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:19:32.093019  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:19:32.093041  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:19:32.093082  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:19:32.093671  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:19:32.154106  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:19:32.220337  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:19:32.281947  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:19:32.341393  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1020 13:19:32.379078  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:19:32.404683  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:19:32.429177  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 13:19:32.454524  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:19:32.480776  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:19:32.500691  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:19:32.528567  478350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:19:32.544151  478350 ssh_runner.go:195] Run: openssl version
	I1020 13:19:32.554777  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:19:32.564217  478350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:19:32.569116  478350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:19:32.569194  478350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:19:32.618381  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:19:32.628456  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:19:32.638772  478350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:19:32.643746  478350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:19:32.643817  478350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:19:32.707652  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:19:32.722180  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:19:32.737927  478350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:19:32.743850  478350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:19:32.743944  478350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:19:32.821670  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
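Note: the symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes; linking each PEM as <hash>.0 under /etc/ssl/certs is what lets the node's TLS stack find a CA by hash lookup. A minimal sketch of the same convention, using one of the certs from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here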
	I1020 13:19:32.841039  478350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:19:32.853874  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:19:32.952046  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:19:33.035458  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:19:33.109165  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:19:33.207876  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:19:33.279421  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
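Note: each `-checkend 86400` call above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 means it stays valid, non-zero would trigger regeneration. The exit-status contract, as a one-off check on one of the same files:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "valid for at least 24h"
    else
        echo "expires within 24h"   # minikube would re-issue the cert in this branch
    fi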
	I1020 13:19:33.337085  478350 kubeadm.go:400] StartCluster: {Name:old-k8s-version-995203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:19:33.337232  478350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:19:33.337328  478350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:19:33.399333  478350 cri.go:89] found id: "52a161091b397fb4b3f3af4bafa726d8d45a44d17cf647d542ccdab6bd1b0daf"
	I1020 13:19:33.399408  478350 cri.go:89] found id: "e8b5d4c1732bb4b651f2ce3c3e2b44ffd50e135a1921aa5edf3ec4e3acb343a4"
	I1020 13:19:33.399443  478350 cri.go:89] found id: "5392efc3a2e72c11b9ba1d3e8474612440b8297d93e158efe009f84187741706"
	I1020 13:19:33.399476  478350 cri.go:89] found id: "7cdb1584428d91650e965692588e4339e2de21c156b9c94681fc8108ca04cfc3"
	I1020 13:19:33.399526  478350 cri.go:89] found id: ""
	I1020 13:19:33.399623  478350 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:19:33.432928  478350 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:19:33Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:19:33.433080  478350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:19:33.452621  478350 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:19:33.452692  478350 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:19:33.452776  478350 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:19:33.466856  478350 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:19:33.467575  478350 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-995203" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:19:33.467936  478350 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-995203" cluster setting kubeconfig missing "old-k8s-version-995203" context setting]
	I1020 13:19:33.468537  478350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
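Note: both the cluster and context entries are missing, so minikube rewrites the kubeconfig under a WriteFile lock rather than patching it. The equivalent manual repair with kubectl would look roughly like this (server address from the log; the --certificate-authority path is illustrative):

    kubectl config set-cluster old-k8s-version-995203 \
        --server=https://192.168.76.2:8443 \
        --certificate-authority=$HOME/.minikube/ca.crt
    kubectl config set-context old-k8s-version-995203 \
        --cluster=old-k8s-version-995203 --user=old-k8s-version-995203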
	I1020 13:19:33.470344  478350 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:19:33.487281  478350 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1020 13:19:33.487363  478350 kubeadm.go:601] duration metric: took 34.652232ms to restartPrimaryControlPlane
	I1020 13:19:33.487386  478350 kubeadm.go:402] duration metric: took 150.310789ms to StartCluster
	I1020 13:19:33.487431  478350 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:19:33.487536  478350 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:19:33.488605  478350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:19:33.488902  478350 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:19:33.489510  478350 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:19:33.489607  478350 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-995203"
	I1020 13:19:33.489623  478350 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-995203"
	W1020 13:19:33.489629  478350 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:19:33.489651  478350 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:19:33.490161  478350 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:19:33.490322  478350 config.go:182] Loaded profile config "old-k8s-version-995203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 13:19:33.490408  478350 addons.go:69] Setting dashboard=true in profile "old-k8s-version-995203"
	I1020 13:19:33.490436  478350 addons.go:238] Setting addon dashboard=true in "old-k8s-version-995203"
	W1020 13:19:33.490458  478350 addons.go:247] addon dashboard should already be in state true
	I1020 13:19:33.490506  478350 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:19:33.490990  478350 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:19:33.491368  478350 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-995203"
	I1020 13:19:33.491385  478350 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-995203"
	I1020 13:19:33.491655  478350 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:19:33.495752  478350 out.go:179] * Verifying Kubernetes components...
	I1020 13:19:33.498650  478350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:19:33.556213  478350 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:19:33.556213  478350 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:19:33.559903  478350 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:19:33.559923  478350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:19:33.559993  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:33.563236  478350 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 13:19:33.564943  478350 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-995203"
	W1020 13:19:33.564964  478350 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:19:33.564988  478350 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:19:33.565389  478350 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:19:33.572432  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:19:33.572459  478350 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:19:33.572543  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:33.622883  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:33.627883  478350 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:19:33.627911  478350 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:19:33.627985  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:33.650648  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:33.661488  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:33.856280  478350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:19:33.866833  478350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:19:33.896568  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:19:33.896644  478350 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:19:33.911482  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:19:33.911563  478350 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:19:33.912281  478350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:19:33.938931  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:19:33.939008  478350 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:19:34.007472  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:19:34.007567  478350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:19:34.073024  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:19:34.073108  478350 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:19:34.136679  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:19:34.136759  478350 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:19:34.184583  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:19:34.184661  478350 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:19:34.198421  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:19:34.198499  478350 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:19:34.214806  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:19:34.214890  478350 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:19:34.230014  478350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
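Note: the single kubectl apply above installs all ten dashboard manifests in one call; the addon machinery treats that as installation done and verifies readiness separately. A hedged way to watch the same rollout converge (namespace and deployment name as they appear in the container status further down):

    kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=2m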
	I1020 13:19:30.794800  479219 out.go:252] * Updating the running docker "cert-expiration-066011" container ...
	I1020 13:19:30.794853  479219 machine.go:93] provisionDockerMachine start ...
	I1020 13:19:30.795012  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:30.828127  479219 main.go:141] libmachine: Using SSH client type: native
	I1020 13:19:30.828468  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1020 13:19:30.828475  479219 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:19:31.004086  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-066011
	
	I1020 13:19:31.004108  479219 ubuntu.go:182] provisioning hostname "cert-expiration-066011"
	I1020 13:19:31.004193  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:31.030220  479219 main.go:141] libmachine: Using SSH client type: native
	I1020 13:19:31.030527  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1020 13:19:31.030536  479219 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-066011 && echo "cert-expiration-066011" | sudo tee /etc/hostname
	I1020 13:19:31.204477  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-066011
	
	I1020 13:19:31.204559  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:31.231734  479219 main.go:141] libmachine: Using SSH client type: native
	I1020 13:19:31.232040  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1020 13:19:31.232054  479219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-066011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-066011/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-066011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:19:31.396969  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:19:31.396985  479219 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:19:31.397011  479219 ubuntu.go:190] setting up certificates
	I1020 13:19:31.397029  479219 provision.go:84] configureAuth start
	I1020 13:19:31.397116  479219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-066011
	I1020 13:19:31.422167  479219 provision.go:143] copyHostCerts
	I1020 13:19:31.422228  479219 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:19:31.422244  479219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:19:31.422322  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:19:31.422443  479219 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:19:31.422448  479219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:19:31.422475  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:19:31.422534  479219 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:19:31.422537  479219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:19:31.422559  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:19:31.422650  479219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-066011 san=[127.0.0.1 192.168.85.2 cert-expiration-066011 localhost minikube]
	I1020 13:19:32.375177  479219 provision.go:177] copyRemoteCerts
	I1020 13:19:32.375231  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:19:32.375268  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:32.398842  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:32.519910  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 13:19:32.549708  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:19:32.576982  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1020 13:19:32.601976  479219 provision.go:87] duration metric: took 1.204922848s to configureAuth
	I1020 13:19:32.601994  479219 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:19:32.602177  479219 config.go:182] Loaded profile config "cert-expiration-066011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:19:32.602284  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:32.628254  479219 main.go:141] libmachine: Using SSH client type: native
	I1020 13:19:32.628603  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1020 13:19:32.628622  479219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:19:38.119753  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:19:38.119765  479219 machine.go:96] duration metric: took 7.324905293s to provisionDockerMachine
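Note: the provisioning step above finishes by writing /etc/sysconfig/crio.minikube so the crio systemd unit picks up `--insecure-registry 10.96.0.0/12` (the cluster's service CIDR) and then restarting the daemon, which accounts for most of the 7.3s. Two quick spot checks, assuming the kicbase unit passes $CRIO_MINIKUBE_OPTIONS on the daemon command line:

    cat /etc/sysconfig/crio.minikube   # should show the insecure-registry option
    ps -o args= -C crio                # the flag should appear in the running command line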
	I1020 13:19:38.119774  479219 start.go:293] postStartSetup for "cert-expiration-066011" (driver="docker")
	I1020 13:19:38.119784  479219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:19:38.119845  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:19:38.119938  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:38.148504  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:38.273668  479219 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:19:38.278462  479219 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:19:38.278480  479219 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:19:38.278490  479219 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:19:38.278549  479219 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:19:38.278627  479219 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:19:38.278730  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:19:38.292548  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:19:38.325479  479219 start.go:296] duration metric: took 205.689549ms for postStartSetup
	I1020 13:19:38.325552  479219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:19:38.325607  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:38.352889  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:38.492290  479219 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:19:38.501014  479219 fix.go:56] duration metric: took 7.740308614s for fixHost
	I1020 13:19:38.501029  479219 start.go:83] releasing machines lock for "cert-expiration-066011", held for 7.740346842s
	I1020 13:19:38.501113  479219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-066011
	I1020 13:19:38.535406  479219 ssh_runner.go:195] Run: cat /version.json
	I1020 13:19:38.535434  479219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:19:38.535454  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:38.535536  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:38.576533  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:38.578850  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:38.712705  479219 ssh_runner.go:195] Run: systemctl --version
	I1020 13:19:38.838421  479219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:19:38.937759  479219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:19:38.949002  479219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:19:38.949072  479219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:19:38.960859  479219 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:19:38.960888  479219 start.go:495] detecting cgroup driver to use...
	I1020 13:19:38.960918  479219 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:19:38.960985  479219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:19:38.982947  479219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:19:39.003288  479219 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:19:39.003345  479219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:19:39.035633  479219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:19:39.060033  479219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:19:39.342970  479219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:19:39.587558  479219 docker.go:234] disabling docker service ...
	I1020 13:19:39.587629  479219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:19:39.605834  479219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:19:39.622264  479219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:19:39.862955  479219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:19:40.145915  479219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:19:40.168690  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:19:40.198551  479219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:19:40.198642  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.215442  479219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:19:40.215550  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.231596  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.244972  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.261538  479219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:19:40.279190  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.291132  479219 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.302192  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.320926  479219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:19:40.331574  479219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:19:40.340778  479219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
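Note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, sets the cgroup manager to cgroupfs to match the detected host driver, moves conmon into the pod cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A sketch of the fragment those edits should leave behind (key placement per CRI-O's config schema; exact ordering may differ):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

The `sudo systemctl restart crio` that applies it appears further down in the interleaved log.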
	I1020 13:19:40.839440  478350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.983076962s)
	I1020 13:19:40.839510  478350 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.9726072s)
	I1020 13:19:40.839540  478350 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-995203" to be "Ready" ...
	I1020 13:19:40.839871  478350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.92753474s)
	I1020 13:19:40.867723  478350 node_ready.go:49] node "old-k8s-version-995203" is "Ready"
	I1020 13:19:40.867758  478350 node_ready.go:38] duration metric: took 28.194435ms for node "old-k8s-version-995203" to be "Ready" ...
	I1020 13:19:40.867777  478350 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:19:40.867840  478350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:19:41.314399  478350 api_server.go:72] duration metric: took 7.825430877s to wait for apiserver process to appear ...
	I1020 13:19:41.314426  478350 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:19:41.314457  478350 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:19:41.315355  478350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.08523823s)
	I1020 13:19:41.318413  478350 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-995203 addons enable metrics-server
	
	I1020 13:19:41.321426  478350 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1020 13:19:41.324440  478350 addons.go:514] duration metric: took 7.834918563s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1020 13:19:41.326144  478350 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 13:19:41.327843  478350 api_server.go:141] control plane version: v1.28.0
	I1020 13:19:41.327873  478350 api_server.go:131] duration metric: took 13.439269ms to wait for apiserver health ...
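Note: the healthz wait above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is the success condition. It can be reproduced by hand (endpoint from the log; -k skips TLS verification, and anonymous access to /healthz is assumed, which default RBAC grants):

    curl -sk https://192.168.76.2:8443/healthz   # prints "ok" when healthy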
	I1020 13:19:41.327883  478350 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:19:41.332119  478350 system_pods.go:59] 8 kube-system pods found
	I1020 13:19:41.332151  478350 system_pods.go:61] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:19:41.332182  478350 system_pods.go:61] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:19:41.332196  478350 system_pods.go:61] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:19:41.332204  478350 system_pods.go:61] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:19:41.332223  478350 system_pods.go:61] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:19:41.332232  478350 system_pods.go:61] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:19:41.332239  478350 system_pods.go:61] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:19:41.332247  478350 system_pods.go:61] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Running
	I1020 13:19:41.332253  478350 system_pods.go:74] duration metric: took 4.363756ms to wait for pod list to return data ...
	I1020 13:19:41.332264  478350 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:19:41.334632  478350 default_sa.go:45] found service account: "default"
	I1020 13:19:41.334656  478350 default_sa.go:55] duration metric: took 2.385991ms for default service account to be created ...
	I1020 13:19:41.334666  478350 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:19:41.338793  478350 system_pods.go:86] 8 kube-system pods found
	I1020 13:19:41.338842  478350 system_pods.go:89] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:19:41.338852  478350 system_pods.go:89] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:19:41.338859  478350 system_pods.go:89] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:19:41.338867  478350 system_pods.go:89] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:19:41.338874  478350 system_pods.go:89] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:19:41.338884  478350 system_pods.go:89] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:19:41.338891  478350 system_pods.go:89] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:19:41.338903  478350 system_pods.go:89] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Running
	I1020 13:19:41.338911  478350 system_pods.go:126] duration metric: took 4.223184ms to wait for k8s-apps to be running ...
	I1020 13:19:41.338929  478350 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:19:41.338989  478350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:19:41.354668  478350 system_svc.go:56] duration metric: took 15.729062ms WaitForService to wait for kubelet
	I1020 13:19:41.354743  478350 kubeadm.go:586] duration metric: took 7.865779018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:19:41.354777  478350 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:19:41.357851  478350 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:19:41.357883  478350 node_conditions.go:123] node cpu capacity is 2
	I1020 13:19:41.357896  478350 node_conditions.go:105] duration metric: took 3.096614ms to run NodePressure ...
	I1020 13:19:41.357929  478350 start.go:241] waiting for startup goroutines ...
	I1020 13:19:41.357944  478350 start.go:246] waiting for cluster config update ...
	I1020 13:19:41.357967  478350 start.go:255] writing updated cluster config ...
	I1020 13:19:41.358262  478350 ssh_runner.go:195] Run: rm -f paused
	I1020 13:19:41.362174  478350 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:19:41.366817  478350 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vqvss" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 13:19:43.373354  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	I1020 13:19:40.620889  479219 ssh_runner.go:195] Run: sudo systemctl restart crio
	W1020 13:19:45.376313  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:47.872251  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:49.872881  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:52.374288  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:54.873175  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:56.873494  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:58.878880  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:20:01.373258  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:20:03.873000  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:20:06.372567  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:20:08.372969  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	I1020 13:20:09.873097  478350 pod_ready.go:94] pod "coredns-5dd5756b68-vqvss" is "Ready"
	I1020 13:20:09.873126  478350 pod_ready.go:86] duration metric: took 28.506281809s for pod "coredns-5dd5756b68-vqvss" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.876419  478350 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.881362  478350 pod_ready.go:94] pod "etcd-old-k8s-version-995203" is "Ready"
	I1020 13:20:09.881390  478350 pod_ready.go:86] duration metric: took 4.943324ms for pod "etcd-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.885177  478350 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.890138  478350 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-995203" is "Ready"
	I1020 13:20:09.890168  478350 pod_ready.go:86] duration metric: took 4.963846ms for pod "kube-apiserver-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.893243  478350 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:10.071155  478350 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-995203" is "Ready"
	I1020 13:20:10.071185  478350 pod_ready.go:86] duration metric: took 177.905641ms for pod "kube-controller-manager-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:10.271933  478350 pod_ready.go:83] waiting for pod "kube-proxy-n8zpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:10.670936  478350 pod_ready.go:94] pod "kube-proxy-n8zpg" is "Ready"
	I1020 13:20:10.670963  478350 pod_ready.go:86] duration metric: took 399.008521ms for pod "kube-proxy-n8zpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:10.871824  478350 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:11.270878  478350 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-995203" is "Ready"
	I1020 13:20:11.270905  478350 pod_ready.go:86] duration metric: took 399.058179ms for pod "kube-scheduler-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:11.270917  478350 pod_ready.go:40] duration metric: took 29.908707887s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
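Note: the 29.9s "extra waiting" above polls each control-plane label selector until the matching pod is Ready or gone. The same checks expressed with kubectl (selectors copied from the log line; timeout matching minikube's 4m budget):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
    kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=4m
    # ...and likewise for component=kube-apiserver, kube-controller-manager,
    # k8s-app=kube-proxy and component=kube-scheduler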
	I1020 13:20:11.334256  478350 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1020 13:20:11.335704  478350 out.go:203] 
	W1020 13:20:11.336913  478350 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1020 13:20:11.338044  478350 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1020 13:20:11.339164  478350 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-995203" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.369300914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.37700704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.378542395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.393897882Z" level=info msg="Created container e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn/dashboard-metrics-scraper" id=49ac2893-1d51-4e71-a536-e10a4f48ddf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.395172302Z" level=info msg="Starting container: e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f" id=a6eb685a-5df0-47ed-95a9-3b98b8dd3eb2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.397422374Z" level=info msg="Started container" PID=1664 containerID=e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn/dashboard-metrics-scraper id=a6eb685a-5df0-47ed-95a9-3b98b8dd3eb2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=030ebf84f8616005cd2907ad672759a9263151bf02447b36a960545f7cd784f4
	Oct 20 13:20:12 old-k8s-version-995203 conmon[1662]: conmon e95dfbb74add44c7037f <ninfo>: container 1664 exited with status 1
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.551527752Z" level=info msg="Removing container: 9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645" id=4381a243-c9a0-4ac7-8f21-ec55e0d32fb0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.562226802Z" level=info msg="Error loading conmon cgroup of container 9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645: cgroup deleted" id=4381a243-c9a0-4ac7-8f21-ec55e0d32fb0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.56552382Z" level=info msg="Removed container 9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn/dashboard-metrics-scraper" id=4381a243-c9a0-4ac7-8f21-ec55e0d32fb0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.220969717Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.225819978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.22585682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.225879729Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.229110154Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.229147143Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.229169683Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.23243082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.232463682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.232483785Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.235621835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.235655206Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.235677294Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.238892967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.238928192Z" level=info msg="Updated default CNI network name to kindnet"
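Note: the CREATE/WRITE/RENAME events above are CRI-O's inotify watcher reacting to kindnet's atomic config update: the DaemonSet writes 10-kindnet.conflist.temp and renames it into place, and CRI-O re-parses the file and re-selects the default network on each event. The result can be inspected on the node:

    ls /etc/cni/net.d/
    cat /etc/cni/net.d/10-kindnet.conflist   # a "ptp"-type network, per the log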
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e95dfbb74add4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   030ebf84f8616       dashboard-metrics-scraper-5f989dc9cf-dxgsn       kubernetes-dashboard
	b9a894360ae53       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           16 seconds ago      Running             storage-provisioner         2                   fe081971236e6       storage-provisioner                              kube-system
	d70e7a2fe3644       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   26 seconds ago      Running             kubernetes-dashboard        0                   6de05bc792eb3       kubernetes-dashboard-8694d4445c-72xxb            kubernetes-dashboard
	c93749c765311       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           47 seconds ago      Running             busybox                     1                   23f4b5b4e5eb4       busybox                                          default
	4f3250498e84a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           47 seconds ago      Running             coredns                     1                   8abbe1203b1d4       coredns-5dd5756b68-vqvss                         kube-system
	0cb2bbb6a3818       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           47 seconds ago      Running             kindnet-cni                 1                   a819849845459       kindnet-5x5fk                                    kube-system
	08498359d61f6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           47 seconds ago      Exited              storage-provisioner         1                   fe081971236e6       storage-provisioner                              kube-system
	00f6d21817a9a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           47 seconds ago      Running             kube-proxy                  1                   1cd8feccebe11       kube-proxy-n8zpg                                 kube-system
	52a161091b397       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           53 seconds ago      Running             kube-controller-manager     1                   faf80dc3ea1af       kube-controller-manager-old-k8s-version-995203   kube-system
	e8b5d4c1732bb       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           53 seconds ago      Running             kube-scheduler              1                   3c883cc50496f       kube-scheduler-old-k8s-version-995203            kube-system
	5392efc3a2e72       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           53 seconds ago      Running             kube-apiserver              1                   4d95d5201ebdb       kube-apiserver-old-k8s-version-995203            kube-system
	7cdb1584428d9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           53 seconds ago      Running             etcd                        1                   f5c27ab73c8bd       etcd-old-k8s-version-995203                      kube-system
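	The dashboard-metrics-scraper container above has already exited twice (ATTEMPT 2) while every other workload recovered after the restart. A minimal sketch for pulling its logs straight from the container runtime, assuming crictl is available on the node via minikube ssh (crictl accepts the truncated container ID shown above):
	
	  # list all scraper containers, including exited attempts
	  minikube -p old-k8s-version-995203 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
	  # read the logs of the exited attempt by (truncated) container ID
	  minikube -p old-k8s-version-995203 ssh -- sudo crictl logs e95dfbb74add4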
	
	
	==> coredns [4f3250498e84ac75870e6c8ade992e57ff6eab7f59095c1680c1c603a64e29d2] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54721 - 18206 "HINFO IN 3026438341382897215.8024713103325109988. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051909799s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
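	The i/o timeout against https://10.96.0.1:443 above means the in-cluster apiserver VIP was unreachable for a window after the restart, which also explains the storage-provisioner crash further down. A minimal in-cluster probe, as a sketch assuming the public curlimages/curl image is pullable (the pod name api-probe is arbitrary):
	
	  kubectl --context old-k8s-version-995203 run api-probe --rm -i --restart=Never \
	    --image=curlimages/curl -- curl -sk --max-time 5 https://10.96.0.1:443/version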
	
	
	==> describe nodes <==
	Name:               old-k8s-version-995203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-995203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=old-k8s-version-995203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_18_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:18:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-995203
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:20:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:20:08 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:20:08 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:20:08 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:20:08 +0000   Mon, 20 Oct 2025 13:18:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-995203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                20e6ba9e-7bcb-4309-8b46-32f70578149b
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-vqvss                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m7s
	  kube-system                 etcd-old-k8s-version-995203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-5x5fk                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m8s
	  kube-system                 kube-apiserver-old-k8s-version-995203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-old-k8s-version-995203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-n8zpg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-old-k8s-version-995203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-dxgsn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-72xxb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m5s               kube-proxy       
	  Normal  Starting                 46s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m21s              kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s              kubelet          Node old-k8s-version-995203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s              kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m21s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m8s               node-controller  Node old-k8s-version-995203 event: Registered Node old-k8s-version-995203 in Controller
	  Normal  NodeReady                90s                kubelet          Node old-k8s-version-995203 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-995203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                node-controller  Node old-k8s-version-995203 event: Registered Node old-k8s-version-995203 in Controller
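	The node returned to Ready at 13:20:08, so the describe output itself is healthy despite the paused components. A compact re-check of just the node conditions, as a sketch using kubectl's jsonpath output:
	
	  kubectl --context old-k8s-version-995203 get node old-k8s-version-995203 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'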
	
	
	==> dmesg <==
	[Oct20 12:51] overlayfs: idmapped layers are currently not supported
	[Oct20 12:56] overlayfs: idmapped layers are currently not supported
	[Oct20 12:57] overlayfs: idmapped layers are currently not supported
	[Oct20 12:58] overlayfs: idmapped layers are currently not supported
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7cdb1584428d91650e965692588e4339e2de21c156b9c94681fc8108ca04cfc3] <==
	{"level":"info","ts":"2025-10-20T13:19:33.340397Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T13:19:33.340477Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T13:19:33.340889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-20T13:19:33.341032Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-20T13:19:33.343648Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T13:19:33.343789Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T13:19:33.355953Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-20T13:19:33.364105Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-20T13:19:33.364231Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-20T13:19:33.380605Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-20T13:19:33.380658Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-20T13:19:34.3644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-20T13:19:34.364448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-20T13:19:34.364477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-20T13:19:34.36449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-20T13:19:34.364507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-20T13:19:34.364517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-20T13:19:34.364525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-20T13:19:34.372571Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-995203 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-20T13:19:34.37262Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T13:19:34.373579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-20T13:19:34.373747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T13:19:34.374613Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-20T13:19:34.375091Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-20T13:19:34.375142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:20:26 up  3:02,  0 user,  load average: 1.47, 2.68, 2.45
	Linux old-k8s-version-995203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0cb2bbb6a3818fc93bc3dcc1c1a14c46c017bb411392ca6add35c0107efebf19] <==
	I1020 13:19:39.019177       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:19:39.024814       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:19:39.024973       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:19:39.024987       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:19:39.024998       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:19:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:19:39.219296       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:19:39.219417       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:19:39.219453       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:19:39.220343       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:20:09.220187       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:20:09.220215       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:20:09.220327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 13:20:09.220487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1020 13:20:10.719699       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:20:10.719731       1 metrics.go:72] Registering metrics
	I1020 13:20:10.719784       1 controller.go:711] "Syncing nftables rules"
	I1020 13:20:19.220643       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:20:19.220679       1 main.go:301] handling current node
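	kindnet hit the same 10.96.0.1:443 timeout as coredns and the storage-provisioner, then synced its caches at 13:20:10, which brackets the apiserver-VIP outage to roughly 13:19:39-13:20:10. A quick follow-up to confirm the watched resources are reachable again (a sketch):
	
	  kubectl --context old-k8s-version-995203 get networkpolicies -A
	  kubectl --context old-k8s-version-995203 -n kube-system logs ds/kindnet --tail=20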
	
	
	==> kube-apiserver [5392efc3a2e72c11b9ba1d3e8474612440b8297d93e158efe009f84187741706] <==
	I1020 13:19:38.214921       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1020 13:19:38.245614       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1020 13:19:38.245983       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:19:38.252262       1 shared_informer.go:318] Caches are synced for configmaps
	I1020 13:19:38.252353       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1020 13:19:38.252413       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1020 13:19:38.252425       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1020 13:19:38.264107       1 aggregator.go:166] initial CRD sync complete...
	I1020 13:19:38.264198       1 autoregister_controller.go:141] Starting autoregister controller
	I1020 13:19:38.264232       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:19:38.264269       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:19:38.277888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 13:19:38.287471       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1020 13:19:38.526441       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:19:38.932208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:19:41.126884       1 controller.go:624] quota admission added evaluator for: namespaces
	I1020 13:19:41.182597       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1020 13:19:41.209463       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:19:41.221917       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:19:41.231796       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1020 13:19:41.280570       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.93.1"}
	I1020 13:19:41.306246       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.176.91"}
	I1020 13:19:51.395682       1 controller.go:624] quota admission added evaluator for: endpoints
	I1020 13:19:51.472551       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1020 13:19:51.572899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
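	The apiserver finished syncing at 13:19:38 and allocated clusterIPs for both dashboard services. A sketch to verify those allocations against the alloc.go lines above:
	
	  kubectl --context old-k8s-version-995203 -n kubernetes-dashboard get svc -o wide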
	
	
	==> kube-controller-manager [52a161091b397fb4b3f3af4bafa726d8d45a44d17cf647d542ccdab6bd1b0daf] <==
	I1020 13:19:51.518895       1 shared_informer.go:318] Caches are synced for attach detach
	I1020 13:19:51.547015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.100521ms"
	I1020 13:19:51.558357       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 13:19:51.559184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.917827ms"
	I1020 13:19:51.569952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.717955ms"
	I1020 13:19:51.578044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.880042ms"
	I1020 13:19:51.582049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="94.286µs"
	I1020 13:19:51.582122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.537µs"
	I1020 13:19:51.584835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.351µs"
	I1020 13:19:51.591225       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1020 13:19:51.591327       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1020 13:19:51.602902       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 13:19:51.608653       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.809µs"
	I1020 13:19:51.918844       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 13:19:51.918873       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1020 13:19:51.958572       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 13:19:56.506402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.674µs"
	I1020 13:19:57.515811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.943µs"
	I1020 13:19:58.550588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="2.00453ms"
	I1020 13:20:00.566045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="28.916965ms"
	I1020 13:20:00.566173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="58.068µs"
	I1020 13:20:09.614646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.077896ms"
	I1020 13:20:09.614776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.07µs"
	I1020 13:20:12.566997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.549µs"
	I1020 13:20:21.830407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.906µs"
	
	
	==> kube-proxy [00f6d21817a9ad327ca249fe819bdc41de35f885521a5955856f753a86b2b56c] <==
	I1020 13:19:39.392206       1 server_others.go:69] "Using iptables proxy"
	I1020 13:19:39.610800       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1020 13:19:39.969311       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:19:39.971187       1 server_others.go:152] "Using iptables Proxier"
	I1020 13:19:39.971286       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1020 13:19:39.971319       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1020 13:19:39.971367       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1020 13:19:39.971612       1 server.go:846] "Version info" version="v1.28.0"
	I1020 13:19:39.971799       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:19:39.972521       1 config.go:188] "Starting service config controller"
	I1020 13:19:40.007447       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1020 13:19:39.981196       1 config.go:97] "Starting endpoint slice config controller"
	I1020 13:19:40.007574       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1020 13:19:40.016032       1 config.go:315] "Starting node config controller"
	I1020 13:19:40.016067       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1020 13:19:40.123138       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1020 13:19:40.123191       1 shared_informer.go:318] Caches are synced for service config
	I1020 13:19:40.123295       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e8b5d4c1732bb4b651f2ce3c3e2b44ffd50e135a1921aa5edf3ec4e3acb343a4] <==
	I1020 13:19:36.370323       1 serving.go:348] Generated self-signed cert in-memory
	I1020 13:19:38.438048       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1020 13:19:38.438165       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:19:38.494995       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1020 13:19:38.495237       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1020 13:19:38.495306       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1020 13:19:38.495393       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1020 13:19:38.515061       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:19:38.520649       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 13:19:38.520795       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:19:38.520832       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1020 13:19:38.622837       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1020 13:19:38.622907       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 13:19:38.695503       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: I1020 13:19:51.624726     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7m2q\" (UniqueName: \"kubernetes.io/projected/51f9001f-f124-4b71-9a9f-d614033d9c3c-kube-api-access-z7m2q\") pod \"dashboard-metrics-scraper-5f989dc9cf-dxgsn\" (UID: \"51f9001f-f124-4b71-9a9f-d614033d9c3c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn"
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: I1020 13:19:51.624863     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cc3251fa-505c-47ad-94ec-14b28587285f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-72xxb\" (UID: \"cc3251fa-505c-47ad-94ec-14b28587285f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72xxb"
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: I1020 13:19:51.625000     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/51f9001f-f124-4b71-9a9f-d614033d9c3c-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-dxgsn\" (UID: \"51f9001f-f124-4b71-9a9f-d614033d9c3c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn"
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: W1020 13:19:51.843717     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/crio-030ebf84f8616005cd2907ad672759a9263151bf02447b36a960545f7cd784f4 WatchSource:0}: Error finding container 030ebf84f8616005cd2907ad672759a9263151bf02447b36a960545f7cd784f4: Status 404 returned error can't find the container with id 030ebf84f8616005cd2907ad672759a9263151bf02447b36a960545f7cd784f4
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: W1020 13:19:51.862288     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/crio-6de05bc792eb3bbb23e4bac1fe6a37a2e54a8aa7a7025bf111f9c08d66cca7c3 WatchSource:0}: Error finding container 6de05bc792eb3bbb23e4bac1fe6a37a2e54a8aa7a7025bf111f9c08d66cca7c3: Status 404 returned error can't find the container with id 6de05bc792eb3bbb23e4bac1fe6a37a2e54a8aa7a7025bf111f9c08d66cca7c3
	Oct 20 13:19:56 old-k8s-version-995203 kubelet[778]: I1020 13:19:56.492907     778 scope.go:117] "RemoveContainer" containerID="757f99b5a75835754368184ea6adc054fceec01fcb91679f5b6b2044b8e2a355"
	Oct 20 13:19:57 old-k8s-version-995203 kubelet[778]: I1020 13:19:57.497371     778 scope.go:117] "RemoveContainer" containerID="757f99b5a75835754368184ea6adc054fceec01fcb91679f5b6b2044b8e2a355"
	Oct 20 13:19:57 old-k8s-version-995203 kubelet[778]: I1020 13:19:57.497727     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:19:57 old-k8s-version-995203 kubelet[778]: E1020 13:19:57.497998     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:19:58 old-k8s-version-995203 kubelet[778]: I1020 13:19:58.499569     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:19:58 old-k8s-version-995203 kubelet[778]: E1020 13:19:58.499856     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:20:01 old-k8s-version-995203 kubelet[778]: I1020 13:20:01.816260     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:20:01 old-k8s-version-995203 kubelet[778]: E1020 13:20:01.816656     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:20:09 old-k8s-version-995203 kubelet[778]: I1020 13:20:09.538448     778 scope.go:117] "RemoveContainer" containerID="08498359d61f644f4b52ac712ce52b9a566408a317365cb6867d4ae77be3b7a1"
	Oct 20 13:20:09 old-k8s-version-995203 kubelet[778]: I1020 13:20:09.557089     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72xxb" podStartSLOduration=10.598473047 podCreationTimestamp="2025-10-20 13:19:51 +0000 UTC" firstStartedPulling="2025-10-20 13:19:51.865735317 +0000 UTC m=+19.763462215" lastFinishedPulling="2025-10-20 13:19:59.824292149 +0000 UTC m=+27.722019047" observedRunningTime="2025-10-20 13:20:00.534832018 +0000 UTC m=+28.432558924" watchObservedRunningTime="2025-10-20 13:20:09.557029879 +0000 UTC m=+37.454756776"
	Oct 20 13:20:12 old-k8s-version-995203 kubelet[778]: I1020 13:20:12.365467     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:20:12 old-k8s-version-995203 kubelet[778]: I1020 13:20:12.549501     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:20:12 old-k8s-version-995203 kubelet[778]: I1020 13:20:12.549895     778 scope.go:117] "RemoveContainer" containerID="e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f"
	Oct 20 13:20:12 old-k8s-version-995203 kubelet[778]: E1020 13:20:12.550283     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:20:21 old-k8s-version-995203 kubelet[778]: I1020 13:20:21.816616     778 scope.go:117] "RemoveContainer" containerID="e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f"
	Oct 20 13:20:21 old-k8s-version-995203 kubelet[778]: E1020 13:20:21.816924     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:20:23 old-k8s-version-995203 kubelet[778]: I1020 13:20:23.595298     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 20 13:20:23 old-k8s-version-995203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:20:23 old-k8s-version-995203 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:20:23 old-k8s-version-995203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
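	The kubelet log captures both strands of this test: dashboard-metrics-scraper is in CrashLoopBackOff with the usual exponential backoff (10s, then 20s), and kubelet.service was stopped at 13:20:23 as part of the pause. To see why the scraper keeps dying, a sketch that reads the previous attempt's logs and the pod's events:
	
	  kubectl --context old-k8s-version-995203 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-5f989dc9cf-dxgsn --previous
	  kubectl --context old-k8s-version-995203 -n kubernetes-dashboard \
	    describe pod dashboard-metrics-scraper-5f989dc9cf-dxgsn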
	
	
	==> kubernetes-dashboard [d70e7a2fe364404b1bf3b0bb1c7eff9af141659e4d5c48c62445a77a433eec2e] <==
	2025/10/20 13:19:59 Using namespace: kubernetes-dashboard
	2025/10/20 13:19:59 Using in-cluster config to connect to apiserver
	2025/10/20 13:19:59 Using secret token for csrf signing
	2025/10/20 13:19:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 13:19:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 13:19:59 Successful initial request to the apiserver, version: v1.28.0
	2025/10/20 13:19:59 Generating JWE encryption key
	2025/10/20 13:19:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 13:19:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 13:20:00 Initializing JWE encryption key from synchronized object
	2025/10/20 13:20:00 Creating in-cluster Sidecar client
	2025/10/20 13:20:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:20:00 Serving insecurely on HTTP port: 9090
	2025/10/20 13:19:59 Starting overwatch
	
	
	==> storage-provisioner [08498359d61f644f4b52ac712ce52b9a566408a317365cb6867d4ae77be3b7a1] <==
	I1020 13:19:39.269842       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 13:20:09.283288       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b9a894360ae536522ed7f07cec598ce802c9de323861ffe9e78dc6dc8622ad05] <==
	I1020 13:20:09.588861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 13:20:09.626054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:20:09.626208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
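	The restarted provisioner (attempt 2) gets past initialization and is mid leader election when the dump ends. It takes an Endpoints-based lock named k8s.io-minikube-hostpath, so the current holder can be read back from the lock object; a sketch, assuming the standard client-go leader-election annotation key:
	
	  kubectl --context old-k8s-version-995203 -n kube-system get endpoints k8s.io-minikube-hostpath \
	    -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'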
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-995203 -n old-k8s-version-995203
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-995203 -n old-k8s-version-995203: exit status 2 (366.57697ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
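Minikube status signals component state through its non-zero exit code, which is why the harness flags exit status 2 as "may be ok": the host container is Running while the just-paused components are not. A sketch that prints the component states the harness queries one-by-one, in a single call with a Go-template format:

	out/minikube-linux-arm64 status -p old-k8s-version-995203 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'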
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-995203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-995203
helpers_test.go:243: (dbg) docker inspect old-k8s-version-995203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743",
	        "Created": "2025-10-20T13:17:39.717282575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478478,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:19:25.013777837Z",
	            "FinishedAt": "2025-10-20T13:19:24.155964442Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/hosts",
	        "LogPath": "/var/lib/docker/containers/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743-json.log",
	        "Name": "/old-k8s-version-995203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-995203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-995203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743",
	                "LowerDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd1bd29aa53f886b0c54970ed8f67c32c398fcd644208603abfea6b0f068c02b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-995203",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-995203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-995203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-995203",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-995203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da803722a71115b58020b6663177b03eae17c4142671af0a9dca7d72fb2c1dad",
	            "SandboxKey": "/var/run/docker/netns/da803722a711",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-995203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:18:25:df:e2:01",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4e48fc4c3ab83a2a7d44a282549d8182e6b6d0f2aee11543e9c45f4ee745a84b",
	                    "EndpointID": "867d45fc36b44321d83f981dbc56271ba3a8d99277fa22247031f910d7716b53",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-995203",
	                        "bc62e325c2a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
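The inspect output shows the node container itself still Running, with host ports 33423-33427 bound on 127.0.0.1, so the pause stopped processes inside the node rather than the container. A one-liner that extracts just the state and port map instead of the full JSON, as a sketch using docker's Go-template format flag:

	docker inspect -f '{{.State.Status}} {{range $port, $bindings := .NetworkSettings.Ports}}{{$port}}->{{(index $bindings 0).HostPort}} {{end}}' \
	  old-k8s-version-995203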
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-995203 -n old-k8s-version-995203
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-995203 -n old-k8s-version-995203: exit status 2 (333.846941ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-995203 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-995203 logs -n 25: (1.274606933s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-308474 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo containerd config dump                                                                                                                                                                                                  │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ ssh     │ -p cilium-308474 sudo crio config                                                                                                                                                                                                             │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │                     │
	│ delete  │ -p cilium-308474                                                                                                                                                                                                                              │ cilium-308474             │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ start   │ -p force-systemd-env-534257 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-534257  │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ delete  │ -p force-systemd-env-534257                                                                                                                                                                                                                   │ force-systemd-env-534257  │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-066011    │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ delete  │ -p kubernetes-upgrade-314577                                                                                                                                                                                                                  │ kubernetes-upgrade-314577 │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p cert-options-123220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ cert-options-123220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ -p cert-options-123220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ delete  │ -p cert-options-123220                                                                                                                                                                                                                        │ cert-options-123220       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	│ stop    │ -p old-k8s-version-995203 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-995203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-066011    │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	│ image   │ old-k8s-version-995203 image list --format=json                                                                                                                                                                                               │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ pause   │ -p old-k8s-version-995203 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-995203    │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:19:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:19:30.440336  479219 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:19:30.440602  479219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:19:30.440607  479219 out.go:374] Setting ErrFile to fd 2...
	I1020 13:19:30.440610  479219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:19:30.440888  479219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:19:30.441271  479219 out.go:368] Setting JSON to false
	I1020 13:19:30.442258  479219 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10921,"bootTime":1760955450,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:19:30.442317  479219 start.go:141] virtualization:  
	I1020 13:19:30.446075  479219 out.go:179] * [cert-expiration-066011] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:19:30.449118  479219 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:19:30.449227  479219 notify.go:220] Checking for updates...
	I1020 13:19:30.455091  479219 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:19:30.458235  479219 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:19:30.461321  479219 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:19:30.464490  479219 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:19:30.467453  479219 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:19:30.470899  479219 config.go:182] Loaded profile config "cert-expiration-066011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:19:30.471533  479219 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:19:30.520507  479219 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:19:30.520613  479219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:19:30.608629  479219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-20 13:19:30.597958897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:19:30.608716  479219 docker.go:318] overlay module found
	I1020 13:19:30.611652  479219 out.go:179] * Using the docker driver based on existing profile
	I1020 13:19:30.614450  479219 start.go:305] selected driver: docker
	I1020 13:19:30.614460  479219 start.go:925] validating driver "docker" against &{Name:cert-expiration-066011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-066011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:19:30.614560  479219 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:19:30.615247  479219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:19:30.703060  479219 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-20 13:19:30.689635864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:19:30.703355  479219 cni.go:84] Creating CNI manager for ""
	I1020 13:19:30.703416  479219 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:19:30.703450  479219 start.go:349] cluster config:
	{Name:cert-expiration-066011 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-066011 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:19:30.706643  479219 out.go:179] * Starting "cert-expiration-066011" primary control-plane node in "cert-expiration-066011" cluster
	I1020 13:19:30.709630  479219 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:19:30.712528  479219 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:19:30.715420  479219 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:19:30.715467  479219 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:19:30.715478  479219 cache.go:58] Caching tarball of preloaded images
	I1020 13:19:30.715590  479219 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:19:30.715598  479219 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:19:30.715706  479219 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/cert-expiration-066011/config.json ...
	I1020 13:19:30.715943  479219 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:19:30.760569  479219 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:19:30.760581  479219 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:19:30.760595  479219 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:19:30.760622  479219 start.go:360] acquireMachinesLock for cert-expiration-066011: {Name:mkfa484931163ca74c18f33f3fb3d9634523330e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:19:30.760675  479219 start.go:364] duration metric: took 37.145µs to acquireMachinesLock for "cert-expiration-066011"
	I1020 13:19:30.760692  479219 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:19:30.760697  479219 fix.go:54] fixHost starting: 
	I1020 13:19:30.760962  479219 cli_runner.go:164] Run: docker container inspect cert-expiration-066011 --format={{.State.Status}}
	I1020 13:19:30.791563  479219 fix.go:112] recreateIfNeeded on cert-expiration-066011: state=Running err=<nil>
	W1020 13:19:30.791589  479219 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 13:19:29.732731  478350 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:19:29.736487  478350 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:19:29.736518  478350 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:19:29.736531  478350 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:19:29.736587  478350 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:19:29.736677  478350 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:19:29.736805  478350 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:19:29.744568  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:19:29.764222  478350 start.go:296] duration metric: took 162.567461ms for postStartSetup
	I1020 13:19:29.764325  478350 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:19:29.764436  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:29.782860  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:29.885706  478350 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:19:29.890381  478350 fix.go:56] duration metric: took 4.948170086s for fixHost
	I1020 13:19:29.890404  478350 start.go:83] releasing machines lock for "old-k8s-version-995203", held for 4.94822251s
	I1020 13:19:29.890472  478350 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-995203
	I1020 13:19:29.909473  478350 ssh_runner.go:195] Run: cat /version.json
	I1020 13:19:29.909522  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:29.909569  478350 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:19:29.909622  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:29.936196  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:29.946230  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:30.131952  478350 ssh_runner.go:195] Run: systemctl --version
	I1020 13:19:30.139453  478350 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:19:30.180463  478350 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:19:30.186266  478350 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:19:30.186347  478350 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:19:30.194854  478350 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:19:30.194931  478350 start.go:495] detecting cgroup driver to use...
	I1020 13:19:30.194982  478350 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:19:30.195062  478350 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:19:30.211678  478350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:19:30.225325  478350 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:19:30.225416  478350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:19:30.241858  478350 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:19:30.255443  478350 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:19:30.381273  478350 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:19:30.569731  478350 docker.go:234] disabling docker service ...
	I1020 13:19:30.569801  478350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:19:30.591130  478350 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:19:30.608640  478350 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:19:30.784194  478350 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:19:30.975684  478350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:19:30.989229  478350 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:19:31.013948  478350 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1020 13:19:31.014025  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.024316  478350 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:19:31.024464  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.035823  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.046586  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.061230  478350 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:19:31.071521  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.089642  478350 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.106599  478350 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:31.119087  478350 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:19:31.129379  478350 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:19:31.138537  478350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:19:31.316419  478350 ssh_runner.go:195] Run: sudo systemctl restart crio
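	# For reference, the cri-o reconfiguration performed by the Run: lines above
	# reduces to roughly this shell sequence (a sketch assembled from this log;
	# every path and value is copied from the commands shown, not from cri-o docs):
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio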
	I1020 13:19:31.486985  478350 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:19:31.487057  478350 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:19:31.494591  478350 start.go:563] Will wait 60s for crictl version
	I1020 13:19:31.494656  478350 ssh_runner.go:195] Run: which crictl
	I1020 13:19:31.499689  478350 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:19:31.547438  478350 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:19:31.547618  478350 ssh_runner.go:195] Run: crio --version
	I1020 13:19:31.582864  478350 ssh_runner.go:195] Run: crio --version
	I1020 13:19:31.619157  478350 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1020 13:19:31.621976  478350 cli_runner.go:164] Run: docker network inspect old-k8s-version-995203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:19:31.644246  478350 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 13:19:31.648404  478350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
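	# For reference, the /etc/hosts rewrite above uses a filter-then-copy idiom:
	# drop any stale host.minikube.internal entry, append the fresh mapping, and
	# copy the temp file back so that only the final cp needs root:
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts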
	I1020 13:19:31.662286  478350 kubeadm.go:883] updating cluster {Name:old-k8s-version-995203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:19:31.662402  478350 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 13:19:31.662452  478350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:19:31.715145  478350 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:19:31.715221  478350 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:19:31.715312  478350 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:19:31.745096  478350 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:19:31.745115  478350 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:19:31.745124  478350 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1020 13:19:31.745225  478350 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-995203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
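	# For reference: the kubelet unit text above is installed later in this log
	# as a systemd drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf);
	# the merged unit that systemd actually runs can be inspected on the node with:
	systemctl cat kubelet --no-pager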
	I1020 13:19:31.745304  478350 ssh_runner.go:195] Run: crio config
	I1020 13:19:31.834831  478350 cni.go:84] Creating CNI manager for ""
	I1020 13:19:31.834855  478350 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:19:31.834874  478350 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:19:31.834899  478350 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-995203 NodeName:old-k8s-version-995203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:19:31.835036  478350 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-995203"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
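	
	# For reference: a rendered kubeadm config like the one above can be checked
	# before kubeadm consumes it; with the staged binary of the matching version
	# (the log places binaries under /var/lib/minikube/binaries/v1.28.0), one
	# illustrative option -- not a step this test itself runs -- is:
	/var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml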
	
	I1020 13:19:31.835112  478350 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1020 13:19:31.843591  478350 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:19:31.843664  478350 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:19:31.852106  478350 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1020 13:19:31.870811  478350 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:19:31.885053  478350 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1020 13:19:31.900206  478350 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:19:31.910294  478350 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:19:31.921032  478350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:19:32.074914  478350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:19:32.092389  478350 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203 for IP: 192.168.76.2
	I1020 13:19:32.092408  478350 certs.go:195] generating shared ca certs ...
	I1020 13:19:32.092424  478350 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:19:32.092579  478350 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:19:32.092620  478350 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:19:32.092627  478350 certs.go:257] generating profile certs ...
	I1020 13:19:32.092712  478350 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.key
	I1020 13:19:32.092773  478350 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key.8c7cc26d
	I1020 13:19:32.092816  478350 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.key
	I1020 13:19:32.092929  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:19:32.092964  478350 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:19:32.092972  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:19:32.092996  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:19:32.093019  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:19:32.093041  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:19:32.093082  478350 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:19:32.093671  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:19:32.154106  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:19:32.220337  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:19:32.281947  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:19:32.341393  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1020 13:19:32.379078  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:19:32.404683  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:19:32.429177  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 13:19:32.454524  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:19:32.480776  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:19:32.500691  478350 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:19:32.528567  478350 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:19:32.544151  478350 ssh_runner.go:195] Run: openssl version
	I1020 13:19:32.554777  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:19:32.564217  478350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:19:32.569116  478350 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:19:32.569194  478350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:19:32.618381  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:19:32.628456  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:19:32.638772  478350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:19:32.643746  478350 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:19:32.643817  478350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:19:32.707652  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:19:32.722180  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:19:32.737927  478350 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:19:32.743850  478350 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:19:32.743944  478350 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:19:32.821670  478350 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
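	# For reference: the test -L / ln -fs pairs above build OpenSSL's hashed CA
	# directory layout, where each certificate is reachable through a link named
	# after its subject hash. Schematically, with names from this log (the
	# b5213941.0 link implies that hash for minikubeCA):
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"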
	I1020 13:19:32.841039  478350 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:19:32.853874  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:19:32.952046  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:19:33.035458  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:19:33.109165  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:19:33.207876  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:19:33.279421  478350 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
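	# For reference: each -checkend 86400 probe above asks OpenSSL whether the
	# certificate will still be valid 86400 seconds (24 h) from now; exit status
	# 0 means yes, non-zero flags an expired or soon-to-expire certificate, e.g.:
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo 'valid for at least 24h' || echo 'expiring or expired'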
	I1020 13:19:33.337085  478350 kubeadm.go:400] StartCluster: {Name:old-k8s-version-995203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-995203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:19:33.337232  478350 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:19:33.337328  478350 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:19:33.399333  478350 cri.go:89] found id: "52a161091b397fb4b3f3af4bafa726d8d45a44d17cf647d542ccdab6bd1b0daf"
	I1020 13:19:33.399408  478350 cri.go:89] found id: "e8b5d4c1732bb4b651f2ce3c3e2b44ffd50e135a1921aa5edf3ec4e3acb343a4"
	I1020 13:19:33.399443  478350 cri.go:89] found id: "5392efc3a2e72c11b9ba1d3e8474612440b8297d93e158efe009f84187741706"
	I1020 13:19:33.399476  478350 cri.go:89] found id: "7cdb1584428d91650e965692588e4339e2de21c156b9c94681fc8108ca04cfc3"
	I1020 13:19:33.399526  478350 cri.go:89] found id: ""
	I1020 13:19:33.399623  478350 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:19:33.432928  478350 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:19:33Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:19:33.433080  478350 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:19:33.452621  478350 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:19:33.452692  478350 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:19:33.452776  478350 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:19:33.466856  478350 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:19:33.467575  478350 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-995203" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:19:33.467936  478350 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-995203" cluster setting kubeconfig missing "old-k8s-version-995203" context setting]
	I1020 13:19:33.468537  478350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:19:33.470344  478350 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:19:33.487281  478350 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1020 13:19:33.487363  478350 kubeadm.go:601] duration metric: took 34.652232ms to restartPrimaryControlPlane
	I1020 13:19:33.487386  478350 kubeadm.go:402] duration metric: took 150.310789ms to StartCluster
	I1020 13:19:33.487431  478350 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:19:33.487536  478350 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:19:33.488605  478350 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:19:33.488902  478350 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:19:33.489510  478350 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:19:33.489607  478350 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-995203"
	I1020 13:19:33.489623  478350 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-995203"
	W1020 13:19:33.489629  478350 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:19:33.489651  478350 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:19:33.490161  478350 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:19:33.490322  478350 config.go:182] Loaded profile config "old-k8s-version-995203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1020 13:19:33.490408  478350 addons.go:69] Setting dashboard=true in profile "old-k8s-version-995203"
	I1020 13:19:33.490436  478350 addons.go:238] Setting addon dashboard=true in "old-k8s-version-995203"
	W1020 13:19:33.490458  478350 addons.go:247] addon dashboard should already be in state true
	I1020 13:19:33.490506  478350 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:19:33.490990  478350 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:19:33.491368  478350 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-995203"
	I1020 13:19:33.491385  478350 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-995203"
	I1020 13:19:33.491655  478350 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:19:33.495752  478350 out.go:179] * Verifying Kubernetes components...
	I1020 13:19:33.498650  478350 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:19:33.556213  478350 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:19:33.556213  478350 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:19:33.559903  478350 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:19:33.559923  478350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:19:33.559993  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:33.563236  478350 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 13:19:33.564943  478350 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-995203"
	W1020 13:19:33.564964  478350 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:19:33.564988  478350 host.go:66] Checking if "old-k8s-version-995203" exists ...
	I1020 13:19:33.565389  478350 cli_runner.go:164] Run: docker container inspect old-k8s-version-995203 --format={{.State.Status}}
	I1020 13:19:33.572432  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:19:33.572459  478350 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:19:33.572543  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:33.622883  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:33.627883  478350 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:19:33.627911  478350 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:19:33.627985  478350 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-995203
	I1020 13:19:33.650648  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:33.661488  478350 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/old-k8s-version-995203/id_rsa Username:docker}
	I1020 13:19:33.856280  478350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:19:33.866833  478350 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:19:33.896568  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:19:33.896644  478350 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:19:33.911482  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:19:33.911563  478350 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:19:33.912281  478350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:19:33.938931  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:19:33.939008  478350 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:19:34.007472  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:19:34.007567  478350 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:19:34.073024  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:19:34.073108  478350 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:19:34.136679  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:19:34.136759  478350 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:19:34.184583  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:19:34.184661  478350 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:19:34.198421  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:19:34.198499  478350 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:19:34.214806  478350 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:19:34.214890  478350 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:19:34.230014  478350 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
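	Note: minikube stages every addon manifest under /etc/kubernetes/addons and then applies the whole set with the cluster's own kubectl binary, exactly as in the Run line above. A minimal hand check once the apply returns (a sketch, assuming kubectl is pointed at this profile's kubeconfig):
	
	  # list the workloads the dashboard manifests above create
	  kubectl -n kubernetes-dashboard get deploy,svc,pods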
	I1020 13:19:30.794800  479219 out.go:252] * Updating the running docker "cert-expiration-066011" container ...
	I1020 13:19:30.794853  479219 machine.go:93] provisionDockerMachine start ...
	I1020 13:19:30.795012  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:30.828127  479219 main.go:141] libmachine: Using SSH client type: native
	I1020 13:19:30.828468  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1020 13:19:30.828475  479219 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:19:31.004086  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-066011
	
	I1020 13:19:31.004108  479219 ubuntu.go:182] provisioning hostname "cert-expiration-066011"
	I1020 13:19:31.004193  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:31.030220  479219 main.go:141] libmachine: Using SSH client type: native
	I1020 13:19:31.030527  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1020 13:19:31.030536  479219 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-066011 && echo "cert-expiration-066011" | sudo tee /etc/hostname
	I1020 13:19:31.204477  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-066011
	
	I1020 13:19:31.204559  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:31.231734  479219 main.go:141] libmachine: Using SSH client type: native
	I1020 13:19:31.232040  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1020 13:19:31.232054  479219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-066011' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-066011/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-066011' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:19:31.396969  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:19:31.396985  479219 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:19:31.397011  479219 ubuntu.go:190] setting up certificates
	I1020 13:19:31.397029  479219 provision.go:84] configureAuth start
	I1020 13:19:31.397116  479219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-066011
	I1020 13:19:31.422167  479219 provision.go:143] copyHostCerts
	I1020 13:19:31.422228  479219 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:19:31.422244  479219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:19:31.422322  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:19:31.422443  479219 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:19:31.422448  479219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:19:31.422475  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:19:31.422534  479219 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:19:31.422537  479219 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:19:31.422559  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:19:31.422650  479219 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-066011 san=[127.0.0.1 192.168.85.2 cert-expiration-066011 localhost minikube]
	I1020 13:19:32.375177  479219 provision.go:177] copyRemoteCerts
	I1020 13:19:32.375231  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:19:32.375268  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:32.398842  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:32.519910  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 13:19:32.549708  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:19:32.576982  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1020 13:19:32.601976  479219 provision.go:87] duration metric: took 1.204922848s to configureAuth
	I1020 13:19:32.601994  479219 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:19:32.602177  479219 config.go:182] Loaded profile config "cert-expiration-066011": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:19:32.602284  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:32.628254  479219 main.go:141] libmachine: Using SSH client type: native
	I1020 13:19:32.628603  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1020 13:19:32.628622  479219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:19:38.119753  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:19:38.119765  479219 machine.go:96] duration metric: took 7.324905293s to provisionDockerMachine
	I1020 13:19:38.119774  479219 start.go:293] postStartSetup for "cert-expiration-066011" (driver="docker")
	I1020 13:19:38.119784  479219 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:19:38.119845  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:19:38.119938  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:38.148504  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:38.273668  479219 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:19:38.278462  479219 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:19:38.278480  479219 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:19:38.278490  479219 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:19:38.278549  479219 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:19:38.278627  479219 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:19:38.278730  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:19:38.292548  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:19:38.325479  479219 start.go:296] duration metric: took 205.689549ms for postStartSetup
	I1020 13:19:38.325552  479219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:19:38.325607  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:38.352889  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:38.492290  479219 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:19:38.501014  479219 fix.go:56] duration metric: took 7.740308614s for fixHost
	I1020 13:19:38.501029  479219 start.go:83] releasing machines lock for "cert-expiration-066011", held for 7.740346842s
	I1020 13:19:38.501113  479219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-066011
	I1020 13:19:38.535406  479219 ssh_runner.go:195] Run: cat /version.json
	I1020 13:19:38.535434  479219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:19:38.535454  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:38.535536  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-066011
	I1020 13:19:38.576533  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:38.578850  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/cert-expiration-066011/id_rsa Username:docker}
	I1020 13:19:38.712705  479219 ssh_runner.go:195] Run: systemctl --version
	I1020 13:19:38.838421  479219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:19:38.937759  479219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:19:38.949002  479219 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:19:38.949072  479219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:19:38.960859  479219 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:19:38.960888  479219 start.go:495] detecting cgroup driver to use...
	I1020 13:19:38.960918  479219 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:19:38.960985  479219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:19:38.982947  479219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:19:39.003288  479219 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:19:39.003345  479219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:19:39.035633  479219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:19:39.060033  479219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:19:39.342970  479219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:19:39.587558  479219 docker.go:234] disabling docker service ...
	I1020 13:19:39.587629  479219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:19:39.605834  479219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:19:39.622264  479219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:19:39.862955  479219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:19:40.145915  479219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
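	Note: the stop/disable/mask sequence above leaves cri-o as the only runtime answering on this node; cri-docker and docker are masked so a daemon-reload cannot bring them back. A quick sanity check of the end state (a sketch; the masked units may report inactive or failed):
	
	  systemctl is-active crio docker containerd cri-docker.service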
	I1020 13:19:40.168690  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:19:40.198551  479219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:19:40.198642  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.215442  479219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:19:40.215550  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.231596  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.244972  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.261538  479219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:19:40.279190  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.291132  479219 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.302192  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:19:40.320926  479219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:19:40.331574  479219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:19:40.340778  479219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
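	Note: the sed pipeline above edits cri-o's drop-in config in place rather than templating a fresh file. The net effect (reconstructed from the commands; the file itself is never printed in this log) is approximately these key/value lines in /etc/crio/crio.conf.d/02-crio.conf:
	
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]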
	I1020 13:19:40.839440  478350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.983076962s)
	I1020 13:19:40.839510  478350 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.9726072s)
	I1020 13:19:40.839540  478350 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-995203" to be "Ready" ...
	I1020 13:19:40.839871  478350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.92753474s)
	I1020 13:19:40.867723  478350 node_ready.go:49] node "old-k8s-version-995203" is "Ready"
	I1020 13:19:40.867758  478350 node_ready.go:38] duration metric: took 28.194435ms for node "old-k8s-version-995203" to be "Ready" ...
	I1020 13:19:40.867777  478350 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:19:40.867840  478350 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:19:41.314399  478350 api_server.go:72] duration metric: took 7.825430877s to wait for apiserver process to appear ...
	I1020 13:19:41.314426  478350 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:19:41.314457  478350 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:19:41.315355  478350 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.08523823s)
	I1020 13:19:41.318413  478350 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-995203 addons enable metrics-server
	
	I1020 13:19:41.321426  478350 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1020 13:19:41.324440  478350 addons.go:514] duration metric: took 7.834918563s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1020 13:19:41.326144  478350 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
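	Note: the healthz gate above is a plain HTTPS GET that must return the literal body "ok". An equivalent manual probe from the host (a sketch; -k skips certificate verification for brevity):
	
	  curl -k https://192.168.76.2:8443/healthz
	  # expected body: ok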
	I1020 13:19:41.327843  478350 api_server.go:141] control plane version: v1.28.0
	I1020 13:19:41.327873  478350 api_server.go:131] duration metric: took 13.439269ms to wait for apiserver health ...
	I1020 13:19:41.327883  478350 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:19:41.332119  478350 system_pods.go:59] 8 kube-system pods found
	I1020 13:19:41.332151  478350 system_pods.go:61] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:19:41.332182  478350 system_pods.go:61] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:19:41.332196  478350 system_pods.go:61] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:19:41.332204  478350 system_pods.go:61] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:19:41.332223  478350 system_pods.go:61] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:19:41.332232  478350 system_pods.go:61] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:19:41.332239  478350 system_pods.go:61] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:19:41.332247  478350 system_pods.go:61] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Running
	I1020 13:19:41.332253  478350 system_pods.go:74] duration metric: took 4.363756ms to wait for pod list to return data ...
	I1020 13:19:41.332264  478350 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:19:41.334632  478350 default_sa.go:45] found service account: "default"
	I1020 13:19:41.334656  478350 default_sa.go:55] duration metric: took 2.385991ms for default service account to be created ...
	I1020 13:19:41.334666  478350 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:19:41.338793  478350 system_pods.go:86] 8 kube-system pods found
	I1020 13:19:41.338842  478350 system_pods.go:89] "coredns-5dd5756b68-vqvss" [b7ec10ed-30b5-4af7-ba79-bf7e9a899603] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:19:41.338852  478350 system_pods.go:89] "etcd-old-k8s-version-995203" [393b9e99-a232-4c4d-b674-909b51ed2b6c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:19:41.338859  478350 system_pods.go:89] "kindnet-5x5fk" [023a6ce0-c4bb-424e-8283-7fb169e3ead2] Running
	I1020 13:19:41.338867  478350 system_pods.go:89] "kube-apiserver-old-k8s-version-995203" [74b36dbe-3a4c-42af-9c5f-f5f90698ea78] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:19:41.338874  478350 system_pods.go:89] "kube-controller-manager-old-k8s-version-995203" [19082ca9-1bbe-4a25-8acb-2eefe2aad116] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:19:41.338884  478350 system_pods.go:89] "kube-proxy-n8zpg" [28a1c992-7dd6-492b-b991-579f78661803] Running
	I1020 13:19:41.338891  478350 system_pods.go:89] "kube-scheduler-old-k8s-version-995203" [3459b6e5-2587-4476-82da-43191db5440f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:19:41.338903  478350 system_pods.go:89] "storage-provisioner" [386e5757-2c12-4037-806b-3451ff6562e2] Running
	I1020 13:19:41.338911  478350 system_pods.go:126] duration metric: took 4.223184ms to wait for k8s-apps to be running ...
	I1020 13:19:41.338929  478350 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:19:41.338989  478350 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:19:41.354668  478350 system_svc.go:56] duration metric: took 15.729062ms WaitForService to wait for kubelet
	I1020 13:19:41.354743  478350 kubeadm.go:586] duration metric: took 7.865779018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:19:41.354777  478350 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:19:41.357851  478350 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:19:41.357883  478350 node_conditions.go:123] node cpu capacity is 2
	I1020 13:19:41.357896  478350 node_conditions.go:105] duration metric: took 3.096614ms to run NodePressure ...
	I1020 13:19:41.357929  478350 start.go:241] waiting for startup goroutines ...
	I1020 13:19:41.357944  478350 start.go:246] waiting for cluster config update ...
	I1020 13:19:41.357967  478350 start.go:255] writing updated cluster config ...
	I1020 13:19:41.358262  478350 ssh_runner.go:195] Run: rm -f paused
	I1020 13:19:41.362174  478350 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:19:41.366817  478350 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vqvss" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 13:19:43.373354  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	I1020 13:19:40.620889  479219 ssh_runner.go:195] Run: sudo systemctl restart crio
	W1020 13:19:45.376313  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:47.872251  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:49.872881  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:52.374288  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:54.873175  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:56.873494  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:19:58.878880  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:20:01.373258  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:20:03.873000  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:20:06.372567  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	W1020 13:20:08.372969  478350 pod_ready.go:104] pod "coredns-5dd5756b68-vqvss" is not "Ready", error: <nil>
	I1020 13:20:09.873097  478350 pod_ready.go:94] pod "coredns-5dd5756b68-vqvss" is "Ready"
	I1020 13:20:09.873126  478350 pod_ready.go:86] duration metric: took 28.506281809s for pod "coredns-5dd5756b68-vqvss" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.876419  478350 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.881362  478350 pod_ready.go:94] pod "etcd-old-k8s-version-995203" is "Ready"
	I1020 13:20:09.881390  478350 pod_ready.go:86] duration metric: took 4.943324ms for pod "etcd-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.885177  478350 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.890138  478350 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-995203" is "Ready"
	I1020 13:20:09.890168  478350 pod_ready.go:86] duration metric: took 4.963846ms for pod "kube-apiserver-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:09.893243  478350 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:10.071155  478350 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-995203" is "Ready"
	I1020 13:20:10.071185  478350 pod_ready.go:86] duration metric: took 177.905641ms for pod "kube-controller-manager-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:10.271933  478350 pod_ready.go:83] waiting for pod "kube-proxy-n8zpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:10.670936  478350 pod_ready.go:94] pod "kube-proxy-n8zpg" is "Ready"
	I1020 13:20:10.670963  478350 pod_ready.go:86] duration metric: took 399.008521ms for pod "kube-proxy-n8zpg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:10.871824  478350 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:11.270878  478350 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-995203" is "Ready"
	I1020 13:20:11.270905  478350 pod_ready.go:86] duration metric: took 399.058179ms for pod "kube-scheduler-old-k8s-version-995203" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:20:11.270917  478350 pod_ready.go:40] duration metric: took 29.908707887s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
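	Note: the pod_ready loop above polls each labelled kube-system pod until it is "Ready" or gone, which is why the coredns pod alone accounts for ~28.5s of the 29.9s total. Roughly the same gate can be expressed per label with kubectl wait (a sketch, not the code minikube runs):
	
	  kubectl -n kube-system wait pod -l k8s-app=kube-dns \
	    --for=condition=Ready --timeout=4m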
	I1020 13:20:11.334256  478350 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1020 13:20:11.335704  478350 out.go:203] 
	W1020 13:20:11.336913  478350 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1020 13:20:11.338044  478350 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1020 13:20:11.339164  478350 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-995203" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.369300914Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.37700704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.378542395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.393897882Z" level=info msg="Created container e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn/dashboard-metrics-scraper" id=49ac2893-1d51-4e71-a536-e10a4f48ddf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.395172302Z" level=info msg="Starting container: e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f" id=a6eb685a-5df0-47ed-95a9-3b98b8dd3eb2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.397422374Z" level=info msg="Started container" PID=1664 containerID=e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn/dashboard-metrics-scraper id=a6eb685a-5df0-47ed-95a9-3b98b8dd3eb2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=030ebf84f8616005cd2907ad672759a9263151bf02447b36a960545f7cd784f4
	Oct 20 13:20:12 old-k8s-version-995203 conmon[1662]: conmon e95dfbb74add44c7037f <ninfo>: container 1664 exited with status 1
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.551527752Z" level=info msg="Removing container: 9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645" id=4381a243-c9a0-4ac7-8f21-ec55e0d32fb0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.562226802Z" level=info msg="Error loading conmon cgroup of container 9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645: cgroup deleted" id=4381a243-c9a0-4ac7-8f21-ec55e0d32fb0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:20:12 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:12.56552382Z" level=info msg="Removed container 9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn/dashboard-metrics-scraper" id=4381a243-c9a0-4ac7-8f21-ec55e0d32fb0 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.220969717Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.225819978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.22585682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.225879729Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.229110154Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.229147143Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.229169683Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.23243082Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.232463682Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.232483785Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.235621835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.235655206Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.235677294Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.238892967Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:20:19 old-k8s-version-995203 crio[653]: time="2025-10-20T13:20:19.238928192Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e95dfbb74add4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   030ebf84f8616       dashboard-metrics-scraper-5f989dc9cf-dxgsn       kubernetes-dashboard
	b9a894360ae53       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago      Running             storage-provisioner         2                   fe081971236e6       storage-provisioner                              kube-system
	d70e7a2fe3644       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago      Running             kubernetes-dashboard        0                   6de05bc792eb3       kubernetes-dashboard-8694d4445c-72xxb            kubernetes-dashboard
	c93749c765311       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   23f4b5b4e5eb4       busybox                                          default
	4f3250498e84a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           49 seconds ago      Running             coredns                     1                   8abbe1203b1d4       coredns-5dd5756b68-vqvss                         kube-system
	0cb2bbb6a3818       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   a819849845459       kindnet-5x5fk                                    kube-system
	08498359d61f6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   fe081971236e6       storage-provisioner                              kube-system
	00f6d21817a9a       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           49 seconds ago      Running             kube-proxy                  1                   1cd8feccebe11       kube-proxy-n8zpg                                 kube-system
	52a161091b397       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           55 seconds ago      Running             kube-controller-manager     1                   faf80dc3ea1af       kube-controller-manager-old-k8s-version-995203   kube-system
	e8b5d4c1732bb       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           55 seconds ago      Running             kube-scheduler              1                   3c883cc50496f       kube-scheduler-old-k8s-version-995203            kube-system
	5392efc3a2e72       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           55 seconds ago      Running             kube-apiserver              1                   4d95d5201ebdb       kube-apiserver-old-k8s-version-995203            kube-system
	7cdb1584428d9       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           55 seconds ago      Running             etcd                        1                   f5c27ab73c8bd       etcd-old-k8s-version-995203                      kube-system
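	Note: the table above is CRI-level state, so unlike kubectl it also lists exited attempts (the dashboard-metrics-scraper container at ATTEMPT 2 has already exited, matching the conmon "exited with status 1" line earlier). A similar listing can be taken on the node directly (a sketch, assuming crictl reads the /etc/crictl.yaml written earlier in this log):
	
	  sudo crictl ps -a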
	
	
	==> coredns [4f3250498e84ac75870e6c8ade992e57ff6eab7f59095c1680c1c603a64e29d2] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54721 - 18206 "HINFO IN 3026438341382897215.8024713103325109988. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051909799s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
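	Note: the final warning above means CoreDNS could not reach the apiserver's ClusterIP (10.96.0.1:443) within its timeout, which is common while kube-proxy is still reprogramming service rules after a restart, as here; it clears once the proxy is up. The same probe can be reproduced from any pod that has curl (a sketch):
	
	  curl -ks https://10.96.0.1/version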
	
	
	==> describe nodes <==
	Name:               old-k8s-version-995203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-995203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=old-k8s-version-995203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_18_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:18:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-995203
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:20:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:20:08 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:20:08 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:20:08 +0000   Mon, 20 Oct 2025 13:17:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:20:08 +0000   Mon, 20 Oct 2025 13:18:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-995203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                20e6ba9e-7bcb-4309-8b46-32f70578149b
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-vqvss                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m9s
	  kube-system                 etcd-old-k8s-version-995203                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-5x5fk                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-old-k8s-version-995203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-old-k8s-version-995203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-n8zpg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-old-k8s-version-995203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-dxgsn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-72xxb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m7s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m23s              kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s              kubelet          Node old-k8s-version-995203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s              kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m23s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m10s              node-controller  Node old-k8s-version-995203 event: Registered Node old-k8s-version-995203 in Controller
	  Normal  NodeReady                92s                kubelet          Node old-k8s-version-995203 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node old-k8s-version-995203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node old-k8s-version-995203 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                node-controller  Node old-k8s-version-995203 event: Registered Node old-k8s-version-995203 in Controller
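	Note: the node dump above is ordinary describe output and can be regenerated at any point while the cluster is up (a sketch, assuming the kubeconfig for this profile):
	
	  kubectl describe node old-k8s-version-995203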
	
	
	==> dmesg <==
	[Oct20 12:51] overlayfs: idmapped layers are currently not supported
	[Oct20 12:56] overlayfs: idmapped layers are currently not supported
	[Oct20 12:57] overlayfs: idmapped layers are currently not supported
	[Oct20 12:58] overlayfs: idmapped layers are currently not supported
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7cdb1584428d91650e965692588e4339e2de21c156b9c94681fc8108ca04cfc3] <==
	{"level":"info","ts":"2025-10-20T13:19:33.340397Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T13:19:33.340477Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-20T13:19:33.340889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-20T13:19:33.341032Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-20T13:19:33.343648Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T13:19:33.343789Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-20T13:19:33.355953Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-20T13:19:33.364105Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-20T13:19:33.364231Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-20T13:19:33.380605Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-20T13:19:33.380658Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-20T13:19:34.3644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-20T13:19:34.364448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-20T13:19:34.364477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-20T13:19:34.36449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-20T13:19:34.364507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-20T13:19:34.364517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-20T13:19:34.364525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-20T13:19:34.372571Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-995203 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-20T13:19:34.37262Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T13:19:34.373579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-20T13:19:34.373747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-20T13:19:34.374613Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-20T13:19:34.375091Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-20T13:19:34.375142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:20:28 up  3:02,  0 user,  load average: 1.47, 2.68, 2.45
	Linux old-k8s-version-995203 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0cb2bbb6a3818fc93bc3dcc1c1a14c46c017bb411392ca6add35c0107efebf19] <==
	I1020 13:19:39.019177       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:19:39.024814       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:19:39.024973       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:19:39.024987       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:19:39.024998       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:19:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:19:39.219296       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:19:39.219417       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:19:39.219453       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:19:39.220343       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:20:09.220187       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:20:09.220215       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:20:09.220327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 13:20:09.220487       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1020 13:20:10.719699       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:20:10.719731       1 metrics.go:72] Registering metrics
	I1020 13:20:10.719784       1 controller.go:711] "Syncing nftables rules"
	I1020 13:20:19.220643       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:20:19.220679       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5392efc3a2e72c11b9ba1d3e8474612440b8297d93e158efe009f84187741706] <==
	I1020 13:19:38.214921       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1020 13:19:38.245614       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1020 13:19:38.245983       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:19:38.252262       1 shared_informer.go:318] Caches are synced for configmaps
	I1020 13:19:38.252353       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1020 13:19:38.252413       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1020 13:19:38.252425       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1020 13:19:38.264107       1 aggregator.go:166] initial CRD sync complete...
	I1020 13:19:38.264198       1 autoregister_controller.go:141] Starting autoregister controller
	I1020 13:19:38.264232       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:19:38.264269       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:19:38.277888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 13:19:38.287471       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1020 13:19:38.526441       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:19:38.932208       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:19:41.126884       1 controller.go:624] quota admission added evaluator for: namespaces
	I1020 13:19:41.182597       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1020 13:19:41.209463       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:19:41.221917       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:19:41.231796       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1020 13:19:41.280570       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.93.1"}
	I1020 13:19:41.306246       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.176.91"}
	I1020 13:19:51.395682       1 controller.go:624] quota admission added evaluator for: endpoints
	I1020 13:19:51.472551       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1020 13:19:51.572899       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [52a161091b397fb4b3f3af4bafa726d8d45a44d17cf647d542ccdab6bd1b0daf] <==
	I1020 13:19:51.518895       1 shared_informer.go:318] Caches are synced for attach detach
	I1020 13:19:51.547015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.100521ms"
	I1020 13:19:51.558357       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 13:19:51.559184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.917827ms"
	I1020 13:19:51.569952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.717955ms"
	I1020 13:19:51.578044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="30.880042ms"
	I1020 13:19:51.582049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="94.286µs"
	I1020 13:19:51.582122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="35.537µs"
	I1020 13:19:51.584835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.351µs"
	I1020 13:19:51.591225       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1020 13:19:51.591327       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1020 13:19:51.602902       1 shared_informer.go:318] Caches are synced for resource quota
	I1020 13:19:51.608653       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.809µs"
	I1020 13:19:51.918844       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 13:19:51.918873       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1020 13:19:51.958572       1 shared_informer.go:318] Caches are synced for garbage collector
	I1020 13:19:56.506402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="40.674µs"
	I1020 13:19:57.515811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.943µs"
	I1020 13:19:58.550588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="2.00453ms"
	I1020 13:20:00.566045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="28.916965ms"
	I1020 13:20:00.566173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="58.068µs"
	I1020 13:20:09.614646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.077896ms"
	I1020 13:20:09.614776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.07µs"
	I1020 13:20:12.566997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.549µs"
	I1020 13:20:21.830407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.906µs"
	
	
	==> kube-proxy [00f6d21817a9ad327ca249fe819bdc41de35f885521a5955856f753a86b2b56c] <==
	I1020 13:19:39.392206       1 server_others.go:69] "Using iptables proxy"
	I1020 13:19:39.610800       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1020 13:19:39.969311       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:19:39.971187       1 server_others.go:152] "Using iptables Proxier"
	I1020 13:19:39.971286       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1020 13:19:39.971319       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1020 13:19:39.971367       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1020 13:19:39.971612       1 server.go:846] "Version info" version="v1.28.0"
	I1020 13:19:39.971799       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:19:39.972521       1 config.go:188] "Starting service config controller"
	I1020 13:19:40.007447       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1020 13:19:39.981196       1 config.go:97] "Starting endpoint slice config controller"
	I1020 13:19:40.007574       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1020 13:19:40.016032       1 config.go:315] "Starting node config controller"
	I1020 13:19:40.016067       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1020 13:19:40.123138       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1020 13:19:40.123191       1 shared_informer.go:318] Caches are synced for service config
	I1020 13:19:40.123295       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e8b5d4c1732bb4b651f2ce3c3e2b44ffd50e135a1921aa5edf3ec4e3acb343a4] <==
	I1020 13:19:36.370323       1 serving.go:348] Generated self-signed cert in-memory
	I1020 13:19:38.438048       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1020 13:19:38.438165       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:19:38.494995       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1020 13:19:38.495237       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1020 13:19:38.495306       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1020 13:19:38.495393       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1020 13:19:38.515061       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:19:38.520649       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 13:19:38.520795       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:19:38.520832       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1020 13:19:38.622837       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1020 13:19:38.622907       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1020 13:19:38.695503       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: I1020 13:19:51.624726     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7m2q\" (UniqueName: \"kubernetes.io/projected/51f9001f-f124-4b71-9a9f-d614033d9c3c-kube-api-access-z7m2q\") pod \"dashboard-metrics-scraper-5f989dc9cf-dxgsn\" (UID: \"51f9001f-f124-4b71-9a9f-d614033d9c3c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn"
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: I1020 13:19:51.624863     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cc3251fa-505c-47ad-94ec-14b28587285f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-72xxb\" (UID: \"cc3251fa-505c-47ad-94ec-14b28587285f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72xxb"
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: I1020 13:19:51.625000     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/51f9001f-f124-4b71-9a9f-d614033d9c3c-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-dxgsn\" (UID: \"51f9001f-f124-4b71-9a9f-d614033d9c3c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn"
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: W1020 13:19:51.843717     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/crio-030ebf84f8616005cd2907ad672759a9263151bf02447b36a960545f7cd784f4 WatchSource:0}: Error finding container 030ebf84f8616005cd2907ad672759a9263151bf02447b36a960545f7cd784f4: Status 404 returned error can't find the container with id 030ebf84f8616005cd2907ad672759a9263151bf02447b36a960545f7cd784f4
	Oct 20 13:19:51 old-k8s-version-995203 kubelet[778]: W1020 13:19:51.862288     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc62e325c2a5fc4d8835892fc5654b7de03cc58f527a34a39fc6ef0f604a2743/crio-6de05bc792eb3bbb23e4bac1fe6a37a2e54a8aa7a7025bf111f9c08d66cca7c3 WatchSource:0}: Error finding container 6de05bc792eb3bbb23e4bac1fe6a37a2e54a8aa7a7025bf111f9c08d66cca7c3: Status 404 returned error can't find the container with id 6de05bc792eb3bbb23e4bac1fe6a37a2e54a8aa7a7025bf111f9c08d66cca7c3
	Oct 20 13:19:56 old-k8s-version-995203 kubelet[778]: I1020 13:19:56.492907     778 scope.go:117] "RemoveContainer" containerID="757f99b5a75835754368184ea6adc054fceec01fcb91679f5b6b2044b8e2a355"
	Oct 20 13:19:57 old-k8s-version-995203 kubelet[778]: I1020 13:19:57.497371     778 scope.go:117] "RemoveContainer" containerID="757f99b5a75835754368184ea6adc054fceec01fcb91679f5b6b2044b8e2a355"
	Oct 20 13:19:57 old-k8s-version-995203 kubelet[778]: I1020 13:19:57.497727     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:19:57 old-k8s-version-995203 kubelet[778]: E1020 13:19:57.497998     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:19:58 old-k8s-version-995203 kubelet[778]: I1020 13:19:58.499569     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:19:58 old-k8s-version-995203 kubelet[778]: E1020 13:19:58.499856     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:20:01 old-k8s-version-995203 kubelet[778]: I1020 13:20:01.816260     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:20:01 old-k8s-version-995203 kubelet[778]: E1020 13:20:01.816656     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:20:09 old-k8s-version-995203 kubelet[778]: I1020 13:20:09.538448     778 scope.go:117] "RemoveContainer" containerID="08498359d61f644f4b52ac712ce52b9a566408a317365cb6867d4ae77be3b7a1"
	Oct 20 13:20:09 old-k8s-version-995203 kubelet[778]: I1020 13:20:09.557089     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-72xxb" podStartSLOduration=10.598473047 podCreationTimestamp="2025-10-20 13:19:51 +0000 UTC" firstStartedPulling="2025-10-20 13:19:51.865735317 +0000 UTC m=+19.763462215" lastFinishedPulling="2025-10-20 13:19:59.824292149 +0000 UTC m=+27.722019047" observedRunningTime="2025-10-20 13:20:00.534832018 +0000 UTC m=+28.432558924" watchObservedRunningTime="2025-10-20 13:20:09.557029879 +0000 UTC m=+37.454756776"
	Oct 20 13:20:12 old-k8s-version-995203 kubelet[778]: I1020 13:20:12.365467     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:20:12 old-k8s-version-995203 kubelet[778]: I1020 13:20:12.549501     778 scope.go:117] "RemoveContainer" containerID="9a52597d37680aaabef2e6061eb35daf32af2f425b3c7c316559ccb83d317645"
	Oct 20 13:20:12 old-k8s-version-995203 kubelet[778]: I1020 13:20:12.549895     778 scope.go:117] "RemoveContainer" containerID="e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f"
	Oct 20 13:20:12 old-k8s-version-995203 kubelet[778]: E1020 13:20:12.550283     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:20:21 old-k8s-version-995203 kubelet[778]: I1020 13:20:21.816616     778 scope.go:117] "RemoveContainer" containerID="e95dfbb74add44c7037fbedaace66df071ade22aa5af14a91b95280f51a11e2f"
	Oct 20 13:20:21 old-k8s-version-995203 kubelet[778]: E1020 13:20:21.816924     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-dxgsn_kubernetes-dashboard(51f9001f-f124-4b71-9a9f-d614033d9c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dxgsn" podUID="51f9001f-f124-4b71-9a9f-d614033d9c3c"
	Oct 20 13:20:23 old-k8s-version-995203 kubelet[778]: I1020 13:20:23.595298     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 20 13:20:23 old-k8s-version-995203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:20:23 old-k8s-version-995203 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:20:23 old-k8s-version-995203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d70e7a2fe364404b1bf3b0bb1c7eff9af141659e4d5c48c62445a77a433eec2e] <==
	2025/10/20 13:19:59 Using namespace: kubernetes-dashboard
	2025/10/20 13:19:59 Using in-cluster config to connect to apiserver
	2025/10/20 13:19:59 Using secret token for csrf signing
	2025/10/20 13:19:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 13:19:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 13:19:59 Successful initial request to the apiserver, version: v1.28.0
	2025/10/20 13:19:59 Generating JWE encryption key
	2025/10/20 13:19:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 13:19:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 13:20:00 Initializing JWE encryption key from synchronized object
	2025/10/20 13:20:00 Creating in-cluster Sidecar client
	2025/10/20 13:20:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:20:00 Serving insecurely on HTTP port: 9090
	2025/10/20 13:19:59 Starting overwatch
	
	
	==> storage-provisioner [08498359d61f644f4b52ac712ce52b9a566408a317365cb6867d4ae77be3b7a1] <==
	I1020 13:19:39.269842       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 13:20:09.283288       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b9a894360ae536522ed7f07cec598ce802c9de323861ffe9e78dc6dc8622ad05] <==
	I1020 13:20:09.588861       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 13:20:09.626054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:20:09.626208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1020 13:20:27.027284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:20:27.027460       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-995203_e9b4da6a-1d33-464a-9f3f-e2c8ce04891a!
	I1020 13:20:27.028326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c95c54c-44b3-45ac-9717-b9e1fd84bb4f", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-995203_e9b4da6a-1d33-464a-9f3f-e2c8ce04891a became leader
	I1020 13:20:27.127834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-995203_e9b4da6a-1d33-464a-9f3f-e2c8ce04891a!
	

                                                
                                                
-- /stdout --
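A note on the kubelet entries above: dashboard-metrics-scraper is cycling through CrashLoopBackOff, and the back-off grows from 10s to 20s between restarts. That matches kubelet's restart back-off (10s base, doubled per consecutive failure, capped at five minutes; the cap is kubelet's documented default, not visible in these logs). A minimal sketch of the sequence:

	package main

	import (
		"fmt"
		"time"
	)

	// restartBackoff models kubelet's container restart back-off:
	// 10s base, doubled after each consecutive failure, capped at 5m.
	// (The 5m cap is kubelet's documented default, assumed here.)
	func restartBackoff(failures int) time.Duration {
		d := 10 * time.Second
		for i := 1; i < failures; i++ {
			d *= 2
			if d >= 5*time.Minute {
				return 5 * time.Minute
			}
		}
		return d
	}

	func main() {
		for n := 1; n <= 6; n++ {
			fmt.Printf("failure %d: back-off %s\n", n, restartBackoff(n))
		}
	}

The first two values it prints (10s, 20s) are exactly the back-offs reported in the kubelet log above.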
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-995203 -n old-k8s-version-995203
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-995203 -n old-k8s-version-995203: exit status 2 (384.973097ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-995203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.31s)
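One failure visible in the logs above: the first storage-provisioner instance died on `Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout`, and kindnet's reflectors hit the same VIP timeout before recovering. A minimal sketch of that probe against the in-cluster service VIP (skipping TLS verification is an assumption made for a bare probe; real in-cluster clients verify against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // same bound as ?timeout=32s in the log
			Transport: &http.Transport{
				// Assumption: no CA bundle for this probe; in-cluster clients
				// use /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err) // the i/o timeout case above
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver VIP reachable:", resp.Status)
	}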

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (473.83436ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:22:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
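The MK_ADDON_ENABLE_PAUSED exit comes from minikube's paused-state check, which runs `sudo runc list -f json` inside the node and treats any non-zero exit as a failure; here runc's default state directory /run/runc does not exist under the crio runtime. A minimal sketch of that probe (the JSON field names follow runc's list output; treat the struct as an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the fields of `runc list -f json` we care about.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPaused() ([]string, error) {
		// The same command the failing check shells out to; it exits
		// non-zero when /run/runc is missing, as in the stderr above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		if ids, err := listPaused(); err != nil {
			fmt.Println("check paused failed:", err)
		} else {
			fmt.Println("paused containers:", ids)
		}
	}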
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-794175 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-794175 describe deploy/metrics-server -n kube-system: exit status 1 (155.10583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-794175 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
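The expected string is just the --registries override prefixed onto the --images override: MetricsServer's image registry.k8s.io/echoserver:1.4 rewritten to live under fake.domain. A minimal sketch of the composition the assertion greps for (overrideImage is a hypothetical helper for illustration, not minikube's own):

	package main

	import "fmt"

	// overrideImage composes the --images and --registries addon flags:
	// the registry override is prefixed onto the image reference.
	// (Hypothetical helper; it only illustrates the expected string.)
	func overrideImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		fmt.Println(overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
		// Output: fake.domain/registry.k8s.io/echoserver:1.4
	}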
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-794175
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-794175:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4",
	        "Created": "2025-10-20T13:20:37.812533704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:20:37.879999385Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/hosts",
	        "LogPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4-json.log",
	        "Name": "/default-k8s-diff-port-794175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-794175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-794175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4",
	                "LowerDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/merged",
	                "UpperDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/diff",
	                "WorkDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-794175",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-794175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-794175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-794175",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-794175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00757847263d9c5cea10b6f60a4704ea4b922900de62dd726fe0d6860be92dbe",
	            "SandboxKey": "/var/run/docker/netns/00757847263d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-794175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:1a:72:28:c5:80",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f24d9859313beae9adf6bbf4afaf7590ce357fd35e4cb1d30db0d0f40ab82b66",
	                    "EndpointID": "12f75bbcbd830626f0e1d4f866b6041c84a5be59d68d6102553aeb6376f72305",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-794175",
	                        "a83c39bdcf1c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
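Most of what the post-mortem needs from this inspect output sits under NetworkSettings.Ports, e.g. the apiserver's 8444/tcp bound to 127.0.0.1:33431. A minimal sketch that pulls such a host port out of `docker inspect` JSON (container name and port taken from the output above):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the slice of `docker inspect` JSON needed
	// to look up host port bindings.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func hostPort(container, port string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		for _, e := range entries {
			for _, b := range e.NetworkSettings.Ports[port] {
				return b.HostIp + ":" + b.HostPort, nil
			}
		}
		return "", fmt.Errorf("no binding for %s", port)
	}

	func main() {
		// 8444/tcp is the --apiserver-port this profile was started with.
		addr, err := hostPort("default-k8s-diff-port-794175", "8444/tcp")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("apiserver published on", addr) // 127.0.0.1:33431 above
	}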
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-794175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-794175 logs -n 25: (1.859924363s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-308474                                                                                                                                                                                                                              │ cilium-308474                │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ start   │ -p force-systemd-env-534257 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-534257     │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ delete  │ -p force-systemd-env-534257                                                                                                                                                                                                                   │ force-systemd-env-534257     │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:15 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:15 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-314577    │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-314577    │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ delete  │ -p kubernetes-upgrade-314577                                                                                                                                                                                                                  │ kubernetes-upgrade-314577    │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p cert-options-123220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ cert-options-123220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ -p cert-options-123220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ delete  │ -p cert-options-123220                                                                                                                                                                                                                        │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	│ stop    │ -p old-k8s-version-995203 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-995203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-995203 image list --format=json                                                                                                                                                                                               │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ pause   │ -p old-k8s-version-995203 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │                     │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:21 UTC │
	│ delete  │ -p cert-expiration-066011                                                                                                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:21 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:21:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:21:34.605434  485872 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:21:34.605569  485872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:21:34.605580  485872 out.go:374] Setting ErrFile to fd 2...
	I1020 13:21:34.605585  485872 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:21:34.605846  485872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:21:34.606264  485872 out.go:368] Setting JSON to false
	I1020 13:21:34.607173  485872 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11045,"bootTime":1760955450,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:21:34.607241  485872 start.go:141] virtualization:  
	I1020 13:21:34.610619  485872 out.go:179] * [embed-certs-979197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:21:34.614781  485872 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:21:34.614947  485872 notify.go:220] Checking for updates...
	I1020 13:21:34.620873  485872 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:21:34.623983  485872 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:21:34.626919  485872 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:21:34.629749  485872 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:21:34.632739  485872 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:21:34.636188  485872 config.go:182] Loaded profile config "default-k8s-diff-port-794175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:21:34.636314  485872 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:21:34.673367  485872 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:21:34.673489  485872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:21:34.730526  485872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:21:34.7209196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:21:34.730638  485872 docker.go:318] overlay module found
	I1020 13:21:34.733648  485872 out.go:179] * Using the docker driver based on user configuration
	I1020 13:21:34.736453  485872 start.go:305] selected driver: docker
	I1020 13:21:34.736476  485872 start.go:925] validating driver "docker" against <nil>
	I1020 13:21:34.736491  485872 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:21:34.737221  485872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:21:34.796822  485872 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:21:34.786839585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:21:34.796977  485872 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 13:21:34.797310  485872 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:21:34.800205  485872 out.go:179] * Using Docker driver with root privileges
	I1020 13:21:34.802990  485872 cni.go:84] Creating CNI manager for ""
	I1020 13:21:34.803065  485872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:21:34.803078  485872 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:21:34.803158  485872 start.go:349] cluster config:
	{Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:21:34.806242  485872 out.go:179] * Starting "embed-certs-979197" primary control-plane node in "embed-certs-979197" cluster
	I1020 13:21:34.809215  485872 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:21:34.812063  485872 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:21:34.814734  485872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:21:34.814793  485872 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:21:34.814806  485872 cache.go:58] Caching tarball of preloaded images
	I1020 13:21:34.814854  485872 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:21:34.814971  485872 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:21:34.814984  485872 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:21:34.815094  485872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/config.json ...
	I1020 13:21:34.815121  485872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/config.json: {Name:mkc0dc6fc643ac7292cdabc544a8160cea8b061e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:21:34.844662  485872 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:21:34.844682  485872 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:21:34.844700  485872 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:21:34.844726  485872 start.go:360] acquireMachinesLock for embed-certs-979197: {Name:mk95b0ada4992492fb672a02a9de970f7541a690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:21:34.844832  485872 start.go:364] duration metric: took 86.081µs to acquireMachinesLock for "embed-certs-979197"
	I1020 13:21:34.844864  485872 start.go:93] Provisioning new machine with config: &{Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:21:34.844934  485872 start.go:125] createHost starting for "" (driver="docker")
	W1020 13:21:33.997945  482452 node_ready.go:57] node "default-k8s-diff-port-794175" has "Ready":"False" status (will retry)
	W1020 13:21:36.495795  482452 node_ready.go:57] node "default-k8s-diff-port-794175" has "Ready":"False" status (will retry)
	I1020 13:21:34.848434  485872 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 13:21:34.848685  485872 start.go:159] libmachine.API.Create for "embed-certs-979197" (driver="docker")
	I1020 13:21:34.848735  485872 client.go:168] LocalClient.Create starting
	I1020 13:21:34.848839  485872 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem
	I1020 13:21:34.848883  485872 main.go:141] libmachine: Decoding PEM data...
	I1020 13:21:34.848907  485872 main.go:141] libmachine: Parsing certificate...
	I1020 13:21:34.848969  485872 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem
	I1020 13:21:34.848994  485872 main.go:141] libmachine: Decoding PEM data...
	I1020 13:21:34.849004  485872 main.go:141] libmachine: Parsing certificate...
	I1020 13:21:34.849373  485872 cli_runner.go:164] Run: docker network inspect embed-certs-979197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 13:21:34.869255  485872 cli_runner.go:211] docker network inspect embed-certs-979197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 13:21:34.869404  485872 network_create.go:284] running [docker network inspect embed-certs-979197] to gather additional debugging logs...
	I1020 13:21:34.869429  485872 cli_runner.go:164] Run: docker network inspect embed-certs-979197
	W1020 13:21:34.887047  485872 cli_runner.go:211] docker network inspect embed-certs-979197 returned with exit code 1
	I1020 13:21:34.887079  485872 network_create.go:287] error running [docker network inspect embed-certs-979197]: docker network inspect embed-certs-979197: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-979197 not found
	I1020 13:21:34.887105  485872 network_create.go:289] output of [docker network inspect embed-certs-979197]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-979197 not found
	
	** /stderr **
	I1020 13:21:34.887198  485872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:21:34.903855  485872 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-31214b196961 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:99:57:10:1b:40} reservation:<nil>}
	I1020 13:21:34.904137  485872 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf6e9e751b4a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:0d:2b:68:24:bc} reservation:<nil>}
	I1020 13:21:34.904515  485872 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-076921d0625d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:c5:51:b1:3d:0c} reservation:<nil>}
	I1020 13:21:34.904851  485872 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f24d9859313b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:5d:6b:40:70:f2} reservation:<nil>}
	I1020 13:21:34.905283  485872 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a36390}
	I1020 13:21:34.905304  485872 network_create.go:124] attempt to create docker network embed-certs-979197 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1020 13:21:34.905365  485872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-979197 embed-certs-979197
	I1020 13:21:34.961743  485872 network_create.go:108] docker network embed-certs-979197 192.168.85.0/24 created
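
The network.go lines above show the free-subnet scan: candidate 192.168.x.0/24 ranges are probed in the step-9 pattern visible in the log (49, 58, 67, 76) and the first range no existing bridge occupies wins, 192.168.85.0/24 here. A simplified Go sketch of that loop, with the taken set standing in for what docker network inspect reports about existing bridges:

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks 192.168.x.0/24 candidates in the step-9 pattern
    // seen in the log and returns the first one absent from taken. taken
    // would be built from the subnets of existing docker bridge networks.
    func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken[cidr] {
                continue
            }
            _, subnet, err := net.ParseCIDR(cidr)
            if err != nil {
                return nil, err
            }
            return subnet, nil
        }
        return nil, fmt.Errorf("no free /24 found")
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
        }
        subnet, _ := firstFreeSubnet(taken)
        fmt.Println(subnet) // 192.168.85.0/24
    }

The gateway then becomes .1 of the chosen range and the node's static IP .2, matching the kic.go line that follows.
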
	I1020 13:21:34.961776  485872 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-979197" container
	I1020 13:21:34.961867  485872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 13:21:34.979717  485872 cli_runner.go:164] Run: docker volume create embed-certs-979197 --label name.minikube.sigs.k8s.io=embed-certs-979197 --label created_by.minikube.sigs.k8s.io=true
	I1020 13:21:35.000616  485872 oci.go:103] Successfully created a docker volume embed-certs-979197
	I1020 13:21:35.000703  485872 cli_runner.go:164] Run: docker run --rm --name embed-certs-979197-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-979197 --entrypoint /usr/bin/test -v embed-certs-979197:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 13:21:35.561096  485872 oci.go:107] Successfully prepared a docker volume embed-certs-979197
	I1020 13:21:35.561180  485872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:21:35.561207  485872 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 13:21:35.561321  485872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-979197:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1020 13:21:38.496506  482452 node_ready.go:57] node "default-k8s-diff-port-794175" has "Ready":"False" status (will retry)
	W1020 13:21:40.496865  482452 node_ready.go:57] node "default-k8s-diff-port-794175" has "Ready":"False" status (will retry)
	I1020 13:21:39.971991  485872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-979197:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.410605828s)
	I1020 13:21:39.972026  485872 kic.go:203] duration metric: took 4.410814848s to extract preloaded images to volume ...
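
Routing the extraction through a disposable container, as in the Completed line above, unpacks the preloaded images straight into the named volume that later mounts at /var in the node, so the host filesystem never sees them. A minimal os/exec sketch of assembling that docker run, using the paths from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tarball := "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
        volume := "embed-certs-979197"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"

        // Mount the tarball read-only and the volume at /extractDir, then
        // let the kicbase image's tar unpack the lz4 archive into the volume.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
        }
    }
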
	W1020 13:21:39.972172  485872 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1020 13:21:39.972286  485872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 13:21:40.054292  485872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-979197 --name embed-certs-979197 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-979197 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-979197 --network embed-certs-979197 --ip 192.168.85.2 --volume embed-certs-979197:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 13:21:40.380133  485872 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Running}}
	I1020 13:21:40.401899  485872 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:21:40.422663  485872 cli_runner.go:164] Run: docker exec embed-certs-979197 stat /var/lib/dpkg/alternatives/iptables
	I1020 13:21:40.470919  485872 oci.go:144] the created container "embed-certs-979197" has a running status.
	I1020 13:21:40.470954  485872 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa...
	I1020 13:21:41.144216  485872 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 13:21:41.172396  485872 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:21:41.196938  485872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 13:21:41.196958  485872 kic_runner.go:114] Args: [docker exec --privileged embed-certs-979197 chown docker:docker /home/docker/.ssh/authorized_keys]
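
The id_rsa pair minted at kic.go:225 backs every later ssh_runner call; only the public half enters the container, where the two kic_runner steps above install it as authorized_keys and chown it to the docker user. A minimal sketch of the key generation, assuming golang.org/x/crypto/ssh for the authorized_keys encoding:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // RSA keypair; minikube stores it under
        // .minikube/machines/<profile>/id_rsa.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        os.WriteFile("id_rsa", privPEM, 0600)
        // The public half is what lands in /home/docker/.ssh/authorized_keys.
        os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644)
    }
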
	I1020 13:21:41.266018  485872 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:21:41.292241  485872 machine.go:93] provisionDockerMachine start ...
	I1020 13:21:41.292339  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:41.311674  485872 main.go:141] libmachine: Using SSH client type: native
	I1020 13:21:41.312023  485872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1020 13:21:41.312034  485872 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:21:41.482778  485872 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-979197
	
	I1020 13:21:41.482802  485872 ubuntu.go:182] provisioning hostname "embed-certs-979197"
	I1020 13:21:41.482896  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:41.505775  485872 main.go:141] libmachine: Using SSH client type: native
	I1020 13:21:41.506095  485872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1020 13:21:41.506113  485872 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-979197 && echo "embed-certs-979197" | sudo tee /etc/hostname
	I1020 13:21:41.673564  485872 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-979197
	
	I1020 13:21:41.673705  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:41.695199  485872 main.go:141] libmachine: Using SSH client type: native
	I1020 13:21:41.695507  485872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1020 13:21:41.695525  485872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-979197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-979197/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-979197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:21:41.860860  485872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
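
Each "About to run SSH command" round trip above is a one-shot session against the container's published 22/tcp port, 127.0.0.1:33433 in this run. A sketch of one such call with golang.org/x/crypto/ssh, reusing the id_rsa from the previous sketch:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        pemBytes, err := os.ReadFile("id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Local-only container, so host key checking is skipped here.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33433", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("hostname")
        fmt.Printf("%s", out) // embed-certs-979197
    }
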
	I1020 13:21:41.860937  485872 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:21:41.860972  485872 ubuntu.go:190] setting up certificates
	I1020 13:21:41.861011  485872 provision.go:84] configureAuth start
	I1020 13:21:41.861135  485872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:21:41.879321  485872 provision.go:143] copyHostCerts
	I1020 13:21:41.879389  485872 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:21:41.879398  485872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:21:41.879502  485872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:21:41.879611  485872 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:21:41.879617  485872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:21:41.879644  485872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:21:41.879699  485872 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:21:41.879703  485872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:21:41.879727  485872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:21:41.879777  485872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-979197 san=[127.0.0.1 192.168.85.2 embed-certs-979197 localhost minikube]
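
The server cert generated at provision.go:117 lists every identity the machine may be addressed by, per the san=[...] line: both IPs plus the hostname aliases. A pared-down crypto/x509 sketch of a cert carrying those SANs; it self-signs for brevity, whereas the real flow signs with the ca.pem and ca-key.pem named above:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-979197"}},
            NotBefore:    time.Now(),
            // CertExpiration from the cluster config above: 26280h0m0s.
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"embed-certs-979197", "localhost", "minikube"},
        }
        // Self-signed for the sketch; the real flow passes the CA cert and
        // key as parent and signer instead of tmpl and key.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
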
	I1020 13:21:42.270617  485872 provision.go:177] copyRemoteCerts
	I1020 13:21:42.270749  485872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:21:42.270822  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:42.289204  485872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:21:42.396497  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:21:42.414430  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:21:42.432689  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1020 13:21:42.453347  485872 provision.go:87] duration metric: took 592.303872ms to configureAuth
	I1020 13:21:42.453371  485872 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:21:42.453594  485872 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:21:42.453708  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:42.470892  485872 main.go:141] libmachine: Using SSH client type: native
	I1020 13:21:42.471227  485872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1020 13:21:42.471243  485872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:21:42.740528  485872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:21:42.740550  485872 machine.go:96] duration metric: took 1.448289485s to provisionDockerMachine
	I1020 13:21:42.740572  485872 client.go:171] duration metric: took 7.89181185s to LocalClient.Create
	I1020 13:21:42.740586  485872 start.go:167] duration metric: took 7.891903141s to libmachine.API.Create "embed-certs-979197"
	I1020 13:21:42.740595  485872 start.go:293] postStartSetup for "embed-certs-979197" (driver="docker")
	I1020 13:21:42.740605  485872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:21:42.740672  485872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:21:42.740718  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:42.759245  485872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:21:42.868481  485872 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:21:42.872046  485872 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:21:42.872089  485872 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:21:42.872100  485872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:21:42.872156  485872 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:21:42.872244  485872 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:21:42.872386  485872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:21:42.882501  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:21:42.900664  485872 start.go:296] duration metric: took 160.055329ms for postStartSetup
	I1020 13:21:42.901029  485872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:21:42.918605  485872 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/config.json ...
	I1020 13:21:42.918888  485872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:21:42.918937  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:42.935793  485872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:21:43.041749  485872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:21:43.047621  485872 start.go:128] duration metric: took 8.202668802s to createHost
	I1020 13:21:43.047645  485872 start.go:83] releasing machines lock for "embed-certs-979197", held for 8.202799092s
	I1020 13:21:43.047721  485872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:21:43.067965  485872 ssh_runner.go:195] Run: cat /version.json
	I1020 13:21:43.068025  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:43.068092  485872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:21:43.068162  485872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:21:43.095463  485872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:21:43.098373  485872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:21:43.200929  485872 ssh_runner.go:195] Run: systemctl --version
	I1020 13:21:43.290004  485872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:21:43.328096  485872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:21:43.332569  485872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:21:43.332662  485872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:21:43.362557  485872 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1020 13:21:43.362581  485872 start.go:495] detecting cgroup driver to use...
	I1020 13:21:43.362614  485872 detect.go:187] detected "cgroupfs" cgroup driver on host os
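
The detected driver matters because it is written into both the CRI-O drop-in (cgroup_manager) and the kubelet config (cgroupDriver) further down, and the two must agree. As an illustrative heuristic only, not minikube's actual detect.go logic, one common check keys off the unified cgroup v2 hierarchy:

    package main

    import (
        "fmt"
        "os"
    )

    // guessCgroupDriver is an illustrative heuristic: on cgroup v2 hosts
    // (unified hierarchy) systemd is the usual manager; otherwise fall back
    // to cgroupfs, which is what this run detected on Ubuntu 20.04.
    func guessCgroupDriver() string {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "systemd"
        }
        return "cgroupfs"
    }

    func main() {
        fmt.Println(guessCgroupDriver())
    }
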
	I1020 13:21:43.362668  485872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:21:43.380772  485872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:21:43.393770  485872 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:21:43.393844  485872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:21:43.411086  485872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:21:43.431150  485872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:21:43.579880  485872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:21:43.719305  485872 docker.go:234] disabling docker service ...
	I1020 13:21:43.719447  485872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:21:43.741246  485872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:21:43.754381  485872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:21:43.868938  485872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:21:43.979130  485872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:21:43.992584  485872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:21:44.013931  485872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:21:44.014005  485872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:21:44.023639  485872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:21:44.023714  485872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:21:44.034495  485872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:21:44.044050  485872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:21:44.053577  485872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:21:44.062310  485872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:21:44.071321  485872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:21:44.088159  485872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
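
Reconstructed from the sed commands above rather than captured from the host, the /etc/crio/crio.conf.d/02-crio.conf drop-in should end up with roughly these keys before the crio restart below (section headers shown as CRI-O conventionally groups them):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
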
	I1020 13:21:44.097335  485872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:21:44.105432  485872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:21:44.114061  485872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:21:44.234091  485872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 13:21:44.370522  485872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:21:44.370620  485872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:21:44.374457  485872 start.go:563] Will wait 60s for crictl version
	I1020 13:21:44.374545  485872 ssh_runner.go:195] Run: which crictl
	I1020 13:21:44.378265  485872 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:21:44.405594  485872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:21:44.405734  485872 ssh_runner.go:195] Run: crio --version
	I1020 13:21:44.436756  485872 ssh_runner.go:195] Run: crio --version
	I1020 13:21:44.473722  485872 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 13:21:44.476571  485872 cli_runner.go:164] Run: docker network inspect embed-certs-979197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:21:44.503700  485872 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 13:21:44.507717  485872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:21:44.517486  485872 kubeadm.go:883] updating cluster {Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:21:44.517608  485872 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:21:44.517670  485872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:21:44.552904  485872 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:21:44.552924  485872 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:21:44.552988  485872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:21:44.583786  485872 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:21:44.583807  485872 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:21:44.583814  485872 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 13:21:44.583900  485872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-979197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 13:21:44.583978  485872 ssh_runner.go:195] Run: crio config
	W1020 13:21:42.996429  482452 node_ready.go:57] node "default-k8s-diff-port-794175" has "Ready":"False" status (will retry)
	W1020 13:21:45.497023  482452 node_ready.go:57] node "default-k8s-diff-port-794175" has "Ready":"False" status (will retry)
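
The interleaved W-lines from process 482452 belong to the concurrent default-k8s-diff-port-794175 start, still polling its node's Ready condition every couple of seconds. A client-go sketch of such a wait loop, assuming kubeconfig access to that cluster:

    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True,
    // mirroring the "will retry" pattern in the log.
    func waitNodeReady(cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep retrying
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == v1.NodeReady {
                        return c.Status == v1.ConditionTrue, nil // "Ready":"False" means retry
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitNodeReady(cs, "default-k8s-diff-port-794175"))
    }
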
	I1020 13:21:44.651514  485872 cni.go:84] Creating CNI manager for ""
	I1020 13:21:44.651543  485872 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:21:44.651565  485872 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:21:44.651592  485872 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-979197 NodeName:embed-certs-979197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:21:44.651726  485872 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-979197"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 13:21:44.651806  485872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:21:44.659849  485872 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:21:44.659921  485872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:21:44.667563  485872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1020 13:21:44.680347  485872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:21:44.692864  485872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1020 13:21:44.707303  485872 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:21:44.711115  485872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:21:44.721075  485872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:21:44.846021  485872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:21:44.863602  485872 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197 for IP: 192.168.85.2
	I1020 13:21:44.863639  485872 certs.go:195] generating shared ca certs ...
	I1020 13:21:44.863672  485872 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:21:44.863862  485872 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:21:44.863946  485872 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:21:44.863961  485872 certs.go:257] generating profile certs ...
	I1020 13:21:44.864057  485872 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/client.key
	I1020 13:21:44.864108  485872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/client.crt with IP's: []
	I1020 13:21:45.022685  485872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/client.crt ...
	I1020 13:21:45.022726  485872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/client.crt: {Name:mk28e00c3cefde228287c3d92b2c575d5b0f75a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:21:45.022962  485872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/client.key ...
	I1020 13:21:45.022973  485872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/client.key: {Name:mk0505efe9df085efa10683668ff181972dce9fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:21:45.023062  485872 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key.78ce9c55
	I1020 13:21:45.023076  485872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.crt.78ce9c55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1020 13:21:45.256160  485872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.crt.78ce9c55 ...
	I1020 13:21:45.256197  485872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.crt.78ce9c55: {Name:mk2d9e757ce0371bd03e2692548b7c02ce444013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:21:45.256463  485872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key.78ce9c55 ...
	I1020 13:21:45.256483  485872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key.78ce9c55: {Name:mk0940a571c447584ce055dcb7328360525e5677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:21:45.256587  485872 certs.go:382] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.crt.78ce9c55 -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.crt
	I1020 13:21:45.256683  485872 certs.go:386] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key.78ce9c55 -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key
	I1020 13:21:45.256755  485872 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.key
	I1020 13:21:45.256780  485872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.crt with IP's: []
	I1020 13:21:45.380105  485872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.crt ...
	I1020 13:21:45.380135  485872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.crt: {Name:mk2b78209f469182d2b87edb4d42ccd104b99e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:21:45.380334  485872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.key ...
	I1020 13:21:45.380351  485872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.key: {Name:mk2544fa07a00f7e66bdd9052aff392fcc46136c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
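The crypto.go steps above generate CA-signed profile certificates with explicit IP SANs (note the `with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]` line for apiserver.crt). A minimal stand-alone sketch of that flow using only the Go standard library; this is not minikube's actual crypto.go, and the names, key sizes, and validity periods are placeholders:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair; in minikube this would be loaded from ca.crt/ca.key.
	// Errors are elided for brevity throughout this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate carrying the IP SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	// PEM-encode, as crypto.go's "Writing cert to ..." steps do to disk.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}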
	I1020 13:21:45.380575  485872 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:21:45.380623  485872 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:21:45.380637  485872 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:21:45.380662  485872 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:21:45.380692  485872 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:21:45.380716  485872 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:21:45.380766  485872 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:21:45.381390  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:21:45.406735  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:21:45.429341  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:21:45.449936  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:21:45.468173  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1020 13:21:45.486840  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:21:45.508671  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:21:45.529175  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:21:45.548599  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:21:45.568301  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:21:45.591361  485872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:21:45.619779  485872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:21:45.635307  485872 ssh_runner.go:195] Run: openssl version
	I1020 13:21:45.649257  485872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:21:45.661916  485872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:21:45.666578  485872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:21:45.666657  485872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:21:45.708793  485872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:21:45.716847  485872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:21:45.724774  485872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:21:45.728265  485872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:21:45.728340  485872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:21:45.769071  485872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:21:45.778389  485872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:21:45.786771  485872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:21:45.791057  485872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:21:45.791124  485872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:21:45.835786  485872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
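The hash-and-symlink sequence above exists because OpenSSL resolves trust anchors in /etc/ssl/certs by subject hash, so each installed PEM needs a "<hash>.0" link (e.g. b5213941.0 for minikubeCA.pem). A local sketch of the same two commands driven from Go; illustrative only, since minikube actually runs them over SSH via ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certDir string) error {
	// Same command as in the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	// Mirrors the logged `test -L ... || ln -fs ...` guard.
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}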
	I1020 13:21:45.844743  485872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:21:45.848461  485872 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 13:21:45.848557  485872 kubeadm.go:400] StartCluster: {Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:21:45.848636  485872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:21:45.848695  485872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:21:45.878617  485872 cri.go:89] found id: ""
	I1020 13:21:45.878690  485872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:21:45.886338  485872 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 13:21:45.893953  485872 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 13:21:45.894037  485872 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 13:21:45.901735  485872 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 13:21:45.901806  485872 kubeadm.go:157] found existing configuration files:
	
	I1020 13:21:45.901865  485872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 13:21:45.909506  485872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 13:21:45.909582  485872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 13:21:45.916671  485872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 13:21:45.924246  485872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 13:21:45.924389  485872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 13:21:45.932886  485872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 13:21:45.940297  485872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 13:21:45.940359  485872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 13:21:45.947783  485872 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 13:21:45.955632  485872 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 13:21:45.955751  485872 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 13:21:45.963167  485872 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 13:21:46.030866  485872 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1020 13:21:46.031250  485872 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1020 13:21:46.104152  485872 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
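The kubeadm init invocation at 13:21:45.963167 can be reproduced locally with os/exec; an abbreviated illustration only (minikube actually runs the full command over SSH through /bin/bash with a PATH override and the complete ignore list shown in the log):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Shortened form of the logged invocation; the real ignore list also
	// covers the DirAvailable/FileAvailable preflight checks.
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem,SystemVerification")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}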
	W1020 13:21:47.996417  482452 node_ready.go:57] node "default-k8s-diff-port-794175" has "Ready":"False" status (will retry)
	I1020 13:21:49.001276  482452 node_ready.go:49] node "default-k8s-diff-port-794175" is "Ready"
	I1020 13:21:49.001313  482452 node_ready.go:38] duration metric: took 40.508220929s for node "default-k8s-diff-port-794175" to be "Ready" ...
	I1020 13:21:49.001330  482452 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:21:49.001392  482452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:21:49.034454  482452 api_server.go:72] duration metric: took 41.22743777s to wait for apiserver process to appear ...
	I1020 13:21:49.034480  482452 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:21:49.034499  482452 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1020 13:21:49.043434  482452 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1020 13:21:49.044498  482452 api_server.go:141] control plane version: v1.34.1
	I1020 13:21:49.044522  482452 api_server.go:131] duration metric: took 10.035618ms to wait for apiserver health ...
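The healthz wait above is a plain HTTPS GET against the apiserver that treats a 200 response with an "ok" body as healthy. A rough stand-alone equivalent; certificate verification is skipped here only to keep the sketch short, whereas minikube trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log records both the status ("returned 200") and the "ok" body.
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.76.2:8444/healthz")
	fmt.Println(ok, err)
}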
	I1020 13:21:49.044533  482452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:21:49.048122  482452 system_pods.go:59] 8 kube-system pods found
	I1020 13:21:49.048156  482452 system_pods.go:61] "coredns-66bc5c9577-fgxwg" [aad94486-511b-4b40-bb0a-3062658223f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:21:49.048164  482452 system_pods.go:61] "etcd-default-k8s-diff-port-794175" [c00388a1-d23a-4d95-a1a5-26ed572a9b74] Running
	I1020 13:21:49.048170  482452 system_pods.go:61] "kindnet-9w4q8" [1c5ecf5e-1060-4862-bfa6-2ae908741f24] Running
	I1020 13:21:49.048175  482452 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-794175" [2b42d7b6-7d4b-420b-baea-1c5475697fcb] Running
	I1020 13:21:49.048180  482452 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-794175" [62b31ac2-6539-4cd5-aba2-5d5be72da8b1] Running
	I1020 13:21:49.048184  482452 system_pods.go:61] "kube-proxy-jkb75" [6bc104b2-0343-49fc-9c3f-d45e5647f138] Running
	I1020 13:21:49.048189  482452 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-794175" [4ea78e53-7f7d-4aa2-9821-a60e1e38916c] Running
	I1020 13:21:49.048194  482452 system_pods.go:61] "storage-provisioner" [e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:21:49.048201  482452 system_pods.go:74] duration metric: took 3.662324ms to wait for pod list to return data ...
	I1020 13:21:49.048209  482452 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:21:49.050780  482452 default_sa.go:45] found service account: "default"
	I1020 13:21:49.050804  482452 default_sa.go:55] duration metric: took 2.589802ms for default service account to be created ...
	I1020 13:21:49.050813  482452 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:21:49.054263  482452 system_pods.go:86] 8 kube-system pods found
	I1020 13:21:49.054294  482452 system_pods.go:89] "coredns-66bc5c9577-fgxwg" [aad94486-511b-4b40-bb0a-3062658223f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:21:49.054302  482452 system_pods.go:89] "etcd-default-k8s-diff-port-794175" [c00388a1-d23a-4d95-a1a5-26ed572a9b74] Running
	I1020 13:21:49.054308  482452 system_pods.go:89] "kindnet-9w4q8" [1c5ecf5e-1060-4862-bfa6-2ae908741f24] Running
	I1020 13:21:49.054313  482452 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-794175" [2b42d7b6-7d4b-420b-baea-1c5475697fcb] Running
	I1020 13:21:49.054317  482452 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-794175" [62b31ac2-6539-4cd5-aba2-5d5be72da8b1] Running
	I1020 13:21:49.054322  482452 system_pods.go:89] "kube-proxy-jkb75" [6bc104b2-0343-49fc-9c3f-d45e5647f138] Running
	I1020 13:21:49.054326  482452 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-794175" [4ea78e53-7f7d-4aa2-9821-a60e1e38916c] Running
	I1020 13:21:49.054331  482452 system_pods.go:89] "storage-provisioner" [e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:21:49.054359  482452 retry.go:31] will retry after 196.414081ms: missing components: kube-dns
	I1020 13:21:49.271400  482452 system_pods.go:86] 8 kube-system pods found
	I1020 13:21:49.271492  482452 system_pods.go:89] "coredns-66bc5c9577-fgxwg" [aad94486-511b-4b40-bb0a-3062658223f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:21:49.271519  482452 system_pods.go:89] "etcd-default-k8s-diff-port-794175" [c00388a1-d23a-4d95-a1a5-26ed572a9b74] Running
	I1020 13:21:49.271559  482452 system_pods.go:89] "kindnet-9w4q8" [1c5ecf5e-1060-4862-bfa6-2ae908741f24] Running
	I1020 13:21:49.271582  482452 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-794175" [2b42d7b6-7d4b-420b-baea-1c5475697fcb] Running
	I1020 13:21:49.271601  482452 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-794175" [62b31ac2-6539-4cd5-aba2-5d5be72da8b1] Running
	I1020 13:21:49.271623  482452 system_pods.go:89] "kube-proxy-jkb75" [6bc104b2-0343-49fc-9c3f-d45e5647f138] Running
	I1020 13:21:49.271644  482452 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-794175" [4ea78e53-7f7d-4aa2-9821-a60e1e38916c] Running
	I1020 13:21:49.271683  482452 system_pods.go:89] "storage-provisioner" [e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:21:49.271713  482452 retry.go:31] will retry after 346.269987ms: missing components: kube-dns
	I1020 13:21:49.624246  482452 system_pods.go:86] 8 kube-system pods found
	I1020 13:21:49.624352  482452 system_pods.go:89] "coredns-66bc5c9577-fgxwg" [aad94486-511b-4b40-bb0a-3062658223f3] Running
	I1020 13:21:49.624394  482452 system_pods.go:89] "etcd-default-k8s-diff-port-794175" [c00388a1-d23a-4d95-a1a5-26ed572a9b74] Running
	I1020 13:21:49.624436  482452 system_pods.go:89] "kindnet-9w4q8" [1c5ecf5e-1060-4862-bfa6-2ae908741f24] Running
	I1020 13:21:49.624459  482452 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-794175" [2b42d7b6-7d4b-420b-baea-1c5475697fcb] Running
	I1020 13:21:49.624479  482452 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-794175" [62b31ac2-6539-4cd5-aba2-5d5be72da8b1] Running
	I1020 13:21:49.624520  482452 system_pods.go:89] "kube-proxy-jkb75" [6bc104b2-0343-49fc-9c3f-d45e5647f138] Running
	I1020 13:21:49.624545  482452 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-794175" [4ea78e53-7f7d-4aa2-9821-a60e1e38916c] Running
	I1020 13:21:49.624574  482452 system_pods.go:89] "storage-provisioner" [e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6] Running
	I1020 13:21:49.624597  482452 system_pods.go:126] duration metric: took 573.777288ms to wait for k8s-apps to be running ...
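The retry.go lines above show the polling pattern used while coredns and storage-provisioner were still Pending: re-check the pod list, sleep a short randomized delay ("will retry after 196.414081ms"), and repeat until everything is Running or a deadline passes. A generic sketch of that loop; the condition function here is a stand-in, not minikube's system_pods check:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(cond func() (bool, string), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, missing := cond()
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for: " + missing)
		}
		// Randomized delay, like the "will retry after ..." lines in the log.
		d := time.Duration(150+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: missing components: %s\n", d, missing)
		time.Sleep(d)
	}
}

func main() {
	n := 0
	_ = waitFor(func() (bool, string) {
		n++
		return n >= 3, "kube-dns" // pretend kube-dns turns Running on poll 3
	}, 10*time.Second)
}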
	I1020 13:21:49.624636  482452 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:21:49.624758  482452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:21:49.641661  482452 system_svc.go:56] duration metric: took 17.016484ms WaitForService to wait for kubelet
	I1020 13:21:49.641736  482452 kubeadm.go:586] duration metric: took 41.834726074s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:21:49.641772  482452 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:21:49.650939  482452 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:21:49.651038  482452 node_conditions.go:123] node cpu capacity is 2
	I1020 13:21:49.651079  482452 node_conditions.go:105] duration metric: took 9.282689ms to run NodePressure ...
	I1020 13:21:49.651124  482452 start.go:241] waiting for startup goroutines ...
	I1020 13:21:49.651164  482452 start.go:246] waiting for cluster config update ...
	I1020 13:21:49.651210  482452 start.go:255] writing updated cluster config ...
	I1020 13:21:49.651698  482452 ssh_runner.go:195] Run: rm -f paused
	I1020 13:21:49.661903  482452 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:21:49.671181  482452 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fgxwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:49.681212  482452 pod_ready.go:94] pod "coredns-66bc5c9577-fgxwg" is "Ready"
	I1020 13:21:49.681299  482452 pod_ready.go:86] duration metric: took 10.026912ms for pod "coredns-66bc5c9577-fgxwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:49.688153  482452 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:49.695480  482452 pod_ready.go:94] pod "etcd-default-k8s-diff-port-794175" is "Ready"
	I1020 13:21:49.695559  482452 pod_ready.go:86] duration metric: took 7.32437ms for pod "etcd-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:49.698947  482452 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:49.706035  482452 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-794175" is "Ready"
	I1020 13:21:49.706123  482452 pod_ready.go:86] duration metric: took 7.083325ms for pod "kube-apiserver-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:49.709573  482452 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:50.067036  482452 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-794175" is "Ready"
	I1020 13:21:50.067115  482452 pod_ready.go:86] duration metric: took 357.457233ms for pod "kube-controller-manager-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:50.266900  482452 pod_ready.go:83] waiting for pod "kube-proxy-jkb75" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:50.666819  482452 pod_ready.go:94] pod "kube-proxy-jkb75" is "Ready"
	I1020 13:21:50.666907  482452 pod_ready.go:86] duration metric: took 399.930819ms for pod "kube-proxy-jkb75" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:50.867639  482452 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:51.267315  482452 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-794175" is "Ready"
	I1020 13:21:51.267394  482452 pod_ready.go:86] duration metric: took 399.652006ms for pod "kube-scheduler-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:21:51.267428  482452 pod_ready.go:40] duration metric: took 1.605421665s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
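The pod_ready.go wait above checks each pod matching the well-known control-plane labels for the PodReady condition. A simplified client-go version of that check; it assumes k8s.io/client-go is on the module path and that ~/.kube/config points at the cluster, and it is not minikube's actual implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One of the label selectors from the log; the real wait iterates all of
	// them (k8s-app=kube-dns, component=etcd, component=kube-apiserver, ...).
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %q Ready=%v\n", p.Name, ready)
	}
}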
	I1020 13:21:51.354350  482452 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:21:51.358306  482452 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-794175" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 20 13:21:49 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:49.208643346Z" level=info msg="Created container 3b676bd18a75407a65e321d551be1e6eae0bae5b091e3a13483e9b1de31056d6: kube-system/coredns-66bc5c9577-fgxwg/coredns" id=9410b9a2-2023-4e75-a239-6c5066f566a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:21:49 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:49.209564737Z" level=info msg="Starting container: 3b676bd18a75407a65e321d551be1e6eae0bae5b091e3a13483e9b1de31056d6" id=3df85651-a1f4-4b52-bb6b-75bd0d64be93 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:21:49 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:49.211096564Z" level=info msg="Started container" PID=1742 containerID=3b676bd18a75407a65e321d551be1e6eae0bae5b091e3a13483e9b1de31056d6 description=kube-system/coredns-66bc5c9577-fgxwg/coredns id=3df85651-a1f4-4b52-bb6b-75bd0d64be93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82264321f94d267c9616203c33db56a1e92587757f07a3bb1bc300e795af86b0
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.946205168Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9f0c0eef-e82b-4ee9-8ee5-314c140ab25c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.946276299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.952777561Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:37b78f9dd94f9d6e9d18b101e5dfc3dcf257358669f09e90731b04375283dfa2 UID:630ece87-4be2-448f-b9d0-4e832072a0c4 NetNS:/var/run/netns/b740bd9f-38bc-413d-ada6-ebf231da0d89 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d4e0}] Aliases:map[]}"
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.95283599Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.965278409Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:37b78f9dd94f9d6e9d18b101e5dfc3dcf257358669f09e90731b04375283dfa2 UID:630ece87-4be2-448f-b9d0-4e832072a0c4 NetNS:/var/run/netns/b740bd9f-38bc-413d-ada6-ebf231da0d89 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d4e0}] Aliases:map[]}"
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.965596345Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.974460542Z" level=info msg="Ran pod sandbox 37b78f9dd94f9d6e9d18b101e5dfc3dcf257358669f09e90731b04375283dfa2 with infra container: default/busybox/POD" id=9f0c0eef-e82b-4ee9-8ee5-314c140ab25c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.977864386Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d56e4682-9853-4908-959b-e616ff8943fb name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.978200817Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d56e4682-9853-4908-959b-e616ff8943fb name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.978307197Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d56e4682-9853-4908-959b-e616ff8943fb name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.981814049Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2d0da7c5-51b8-4508-83d3-0e415fff3c50 name=/runtime.v1.ImageService/PullImage
	Oct 20 13:21:51 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:51.985699581Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 13:21:53 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:53.985378046Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=2d0da7c5-51b8-4508-83d3-0e415fff3c50 name=/runtime.v1.ImageService/PullImage
	Oct 20 13:21:53 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:53.986595613Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1b2dfc32-ddb8-4ccc-9a91-cb4dfc3fe209 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:21:53 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:53.990775917Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4109e2ea-e34d-43d0-99e3-d0960804da64 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:21:53 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:53.99833339Z" level=info msg="Creating container: default/busybox/busybox" id=59d732ea-6116-42cd-8eb3-fd63fced5885 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:21:53 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:53.998640783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:21:54 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:54.004474762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:21:54 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:54.005205963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:21:54 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:54.028491099Z" level=info msg="Created container 01b9c3855893c7771b3eeb7cfb5abd127ee96ed865576f48ce9466897eb24a2f: default/busybox/busybox" id=59d732ea-6116-42cd-8eb3-fd63fced5885 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:21:54 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:54.032790051Z" level=info msg="Starting container: 01b9c3855893c7771b3eeb7cfb5abd127ee96ed865576f48ce9466897eb24a2f" id=9c13c016-09a3-4ece-bce2-e70ff8218b30 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:21:54 default-k8s-diff-port-794175 crio[839]: time="2025-10-20T13:21:54.043375551Z" level=info msg="Started container" PID=1798 containerID=01b9c3855893c7771b3eeb7cfb5abd127ee96ed865576f48ce9466897eb24a2f description=default/busybox/busybox id=9c13c016-09a3-4ece-bce2-e70ff8218b30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=37b78f9dd94f9d6e9d18b101e5dfc3dcf257358669f09e90731b04375283dfa2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	01b9c3855893c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   37b78f9dd94f9       busybox                                                default
	3b676bd18a754       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   82264321f94d2       coredns-66bc5c9577-fgxwg                               kube-system
	cda049e26c5d4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   590ab872b924d       storage-provisioner                                    kube-system
	76dcc23093579       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   c2ea991c15ac1       kube-proxy-jkb75                                       kube-system
	669214dacd34a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   166bb3abbefb1       kindnet-9w4q8                                          kube-system
	debc48233dc32       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   4b5331cf8c945       kube-controller-manager-default-k8s-diff-port-794175   kube-system
	94a6f34d395a1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   406957fcb1271       kube-scheduler-default-k8s-diff-port-794175            kube-system
	529aa7ea06e0a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   73561d6c70104       etcd-default-k8s-diff-port-794175                      kube-system
	0f90e3a109062       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   aeacf94114993       kube-apiserver-default-k8s-diff-port-794175            kube-system
	
	
	==> coredns [3b676bd18a75407a65e321d551be1e6eae0bae5b091e3a13483e9b1de31056d6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33099 - 36449 "HINFO IN 5997399889000919344.8620822382560693192. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030108503s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-794175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-794175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=default-k8s-diff-port-794175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_21_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:20:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-794175
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:21:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:21:52 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:21:52 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:21:52 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:21:52 +0000   Mon, 20 Oct 2025 13:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-794175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e9dbb7f7-719c-4a64-84f6-74d2f47cffc5
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-fgxwg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-794175                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-9w4q8                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-794175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-794175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-jkb75                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-794175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 53s   kube-proxy       
	  Normal   Starting                 60s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s   kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s   kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s   kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s   node-controller  Node default-k8s-diff-port-794175 event: Registered Node default-k8s-diff-port-794175 in Controller
	  Normal   NodeReady                14s   kubelet          Node default-k8s-diff-port-794175 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct20 12:57] overlayfs: idmapped layers are currently not supported
	[Oct20 12:58] overlayfs: idmapped layers are currently not supported
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [529aa7ea06e0a484225f9d70853061b17df8d180ef0b6ac3a25bba56a06eca88] <==
	{"level":"warn","ts":"2025-10-20T13:20:58.013533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.033089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.055332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.082478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.082725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.102314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.122220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.136677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.161357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.193162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.212614Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.264947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.281827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.305320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.317780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.347656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.360708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.381432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.400837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.413275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.435014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.457333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.474037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.496475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:20:58.581778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46468","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:22:02 up  3:04,  0 user,  load average: 2.29, 2.59, 2.44
	Linux default-k8s-diff-port-794175 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [669214dacd34a2f153803b6d4858d55251231982191562d26f751523b89bac98] <==
	I1020 13:21:08.302658       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:21:08.303055       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:21:08.303180       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:21:08.303192       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:21:08.303205       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:21:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:21:08.510362       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:21:08.510389       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:21:08.510398       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:21:08.510524       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:21:38.510027       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:21:38.510135       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 13:21:38.511471       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:21:38.511503       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1020 13:21:39.811007       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:21:39.811043       1 metrics.go:72] Registering metrics
	I1020 13:21:39.811118       1 controller.go:711] "Syncing nftables rules"
	I1020 13:21:48.516825       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:21:48.516874       1 main.go:301] handling current node
	I1020 13:21:58.509886       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:21:58.510013       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0f90e3a1090622ea6414e2d9a4ced76a0950bea82e4e9b0056d09784f9814aeb] <==
	I1020 13:20:59.467336       1 policy_source.go:240] refreshing policies
	E1020 13:20:59.496782       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1020 13:20:59.510776       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:20:59.554953       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:20:59.555079       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 13:20:59.568291       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:20:59.571847       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:20:59.646290       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:21:00.234584       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 13:21:00.251356       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 13:21:00.251380       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:21:01.087693       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:21:01.144252       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:21:01.219580       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 13:21:01.228463       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1020 13:21:01.229650       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:21:01.235005       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:21:01.369236       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:21:02.147889       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:21:02.162534       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 13:21:02.177876       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 13:21:06.383974       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:21:06.389406       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:21:06.721203       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:21:07.119836       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [debc48233dc3276972e21e012581b6a6bf2955deadb1b803ab01c324f8d91a43] <==
	I1020 13:21:06.411038       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:21:06.412405       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:21:06.413541       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:21:06.413622       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 13:21:06.414601       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 13:21:06.414648       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:21:06.414690       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:21:06.414634       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 13:21:06.414885       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 13:21:06.415210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 13:21:06.415251       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:21:06.415295       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 13:21:06.416017       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 13:21:06.416031       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:21:06.416268       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 13:21:06.417161       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 13:21:06.421416       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 13:21:06.422644       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:21:06.424845       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:21:06.440182       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:21:06.446509       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:21:06.466147       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:21:06.466172       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:21:06.466180       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:21:51.402527       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [76dcc230935790f69a5abd6ed2c0e66198882d9a9dc82c461d6d8d8a94f4a897] <==
	I1020 13:21:08.349091       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:21:08.426189       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:21:08.526740       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:21:08.533865       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:21:08.535429       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:21:08.671557       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:21:08.671714       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:21:08.677770       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:21:08.678158       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:21:08.678345       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:21:08.679583       1 config.go:200] "Starting service config controller"
	I1020 13:21:08.679650       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:21:08.679693       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:21:08.679719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:21:08.679770       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:21:08.679797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:21:08.681199       1 config.go:309] "Starting node config controller"
	I1020 13:21:08.681873       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:21:08.681930       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:21:08.780470       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:21:08.780483       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:21:08.780501       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [94a6f34d395a1eb8fee2fd0b96895d9528858f764cddfde541b365ea0f1e9728] <==
	E1020 13:20:59.379174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 13:20:59.379424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 13:20:59.379835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 13:20:59.379897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 13:20:59.380020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 13:20:59.380019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 13:21:00.188992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 13:21:00.228208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 13:21:00.248425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1020 13:21:00.248630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 13:21:00.261742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 13:21:00.299193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 13:21:00.321742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 13:21:00.408706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 13:21:00.534747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 13:21:00.601570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 13:21:00.607565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 13:21:00.607732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 13:21:00.621805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 13:21:00.737339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 13:21:00.755319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 13:21:00.773615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 13:21:00.810213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 13:21:00.845580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1020 13:21:03.058291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:07.247036    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1c5ecf5e-1060-4862-bfa6-2ae908741f24-cni-cfg\") pod \"kindnet-9w4q8\" (UID: \"1c5ecf5e-1060-4862-bfa6-2ae908741f24\") " pod="kube-system/kindnet-9w4q8"
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:07.247071    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c5ecf5e-1060-4862-bfa6-2ae908741f24-xtables-lock\") pod \"kindnet-9w4q8\" (UID: \"1c5ecf5e-1060-4862-bfa6-2ae908741f24\") " pod="kube-system/kindnet-9w4q8"
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:07.247113    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz8ng\" (UniqueName: \"kubernetes.io/projected/6bc104b2-0343-49fc-9c3f-d45e5647f138-kube-api-access-xz8ng\") pod \"kube-proxy-jkb75\" (UID: \"6bc104b2-0343-49fc-9c3f-d45e5647f138\") " pod="kube-system/kube-proxy-jkb75"
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: E1020 13:21:07.361611    1298 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: E1020 13:21:07.361657    1298 projected.go:196] Error preparing data for projected volume kube-api-access-r2crm for pod kube-system/kindnet-9w4q8: configmap "kube-root-ca.crt" not found
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: E1020 13:21:07.361739    1298 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c5ecf5e-1060-4862-bfa6-2ae908741f24-kube-api-access-r2crm podName:1c5ecf5e-1060-4862-bfa6-2ae908741f24 nodeName:}" failed. No retries permitted until 2025-10-20 13:21:07.861706569 +0000 UTC m=+5.877091840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r2crm" (UniqueName: "kubernetes.io/projected/1c5ecf5e-1060-4862-bfa6-2ae908741f24-kube-api-access-r2crm") pod "kindnet-9w4q8" (UID: "1c5ecf5e-1060-4862-bfa6-2ae908741f24") : configmap "kube-root-ca.crt" not found
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: E1020 13:21:07.363109    1298 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: E1020 13:21:07.363152    1298 projected.go:196] Error preparing data for projected volume kube-api-access-xz8ng for pod kube-system/kube-proxy-jkb75: configmap "kube-root-ca.crt" not found
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: E1020 13:21:07.363202    1298 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6bc104b2-0343-49fc-9c3f-d45e5647f138-kube-api-access-xz8ng podName:6bc104b2-0343-49fc-9c3f-d45e5647f138 nodeName:}" failed. No retries permitted until 2025-10-20 13:21:07.863184939 +0000 UTC m=+5.878570218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xz8ng" (UniqueName: "kubernetes.io/projected/6bc104b2-0343-49fc-9c3f-d45e5647f138-kube-api-access-xz8ng") pod "kube-proxy-jkb75" (UID: "6bc104b2-0343-49fc-9c3f-d45e5647f138") : configmap "kube-root-ca.crt" not found
	Oct 20 13:21:07 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:07.864020    1298 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 20 13:21:08 default-k8s-diff-port-794175 kubelet[1298]: W1020 13:21:08.101081    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/crio-166bb3abbefb1bc18cc32580c400aeca69a739abfc8fb24a9fa0217659173076 WatchSource:0}: Error finding container 166bb3abbefb1bc18cc32580c400aeca69a739abfc8fb24a9fa0217659173076: Status 404 returned error can't find the container with id 166bb3abbefb1bc18cc32580c400aeca69a739abfc8fb24a9fa0217659173076
	Oct 20 13:21:08 default-k8s-diff-port-794175 kubelet[1298]: W1020 13:21:08.136089    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/crio-c2ea991c15ac1b03066402a3b35335a1169cb27fe0b6306bedaf22ac71223486 WatchSource:0}: Error finding container c2ea991c15ac1b03066402a3b35335a1169cb27fe0b6306bedaf22ac71223486: Status 404 returned error can't find the container with id c2ea991c15ac1b03066402a3b35335a1169cb27fe0b6306bedaf22ac71223486
	Oct 20 13:21:09 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:09.269767    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9w4q8" podStartSLOduration=2.26974818 podStartE2EDuration="2.26974818s" podCreationTimestamp="2025-10-20 13:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:21:08.264946955 +0000 UTC m=+6.280332242" watchObservedRunningTime="2025-10-20 13:21:09.26974818 +0000 UTC m=+7.285133459"
	Oct 20 13:21:12 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:12.212134    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jkb75" podStartSLOduration=5.212112754 podStartE2EDuration="5.212112754s" podCreationTimestamp="2025-10-20 13:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:21:09.270399331 +0000 UTC m=+7.285784643" watchObservedRunningTime="2025-10-20 13:21:12.212112754 +0000 UTC m=+10.227498049"
	Oct 20 13:21:48 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:48.730958    1298 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 20 13:21:48 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:48.870152    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6-tmp\") pod \"storage-provisioner\" (UID: \"e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6\") " pod="kube-system/storage-provisioner"
	Oct 20 13:21:48 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:48.870376    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87g6g\" (UniqueName: \"kubernetes.io/projected/aad94486-511b-4b40-bb0a-3062658223f3-kube-api-access-87g6g\") pod \"coredns-66bc5c9577-fgxwg\" (UID: \"aad94486-511b-4b40-bb0a-3062658223f3\") " pod="kube-system/coredns-66bc5c9577-fgxwg"
	Oct 20 13:21:48 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:48.870489    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt6h7\" (UniqueName: \"kubernetes.io/projected/e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6-kube-api-access-qt6h7\") pod \"storage-provisioner\" (UID: \"e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6\") " pod="kube-system/storage-provisioner"
	Oct 20 13:21:48 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:48.870591    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aad94486-511b-4b40-bb0a-3062658223f3-config-volume\") pod \"coredns-66bc5c9577-fgxwg\" (UID: \"aad94486-511b-4b40-bb0a-3062658223f3\") " pod="kube-system/coredns-66bc5c9577-fgxwg"
	Oct 20 13:21:49 default-k8s-diff-port-794175 kubelet[1298]: W1020 13:21:49.092312    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/crio-590ab872b924d7f4fdece0ebd78054ef5211261c7c37a1672bdc44223fe3fc4a WatchSource:0}: Error finding container 590ab872b924d7f4fdece0ebd78054ef5211261c7c37a1672bdc44223fe3fc4a: Status 404 returned error can't find the container with id 590ab872b924d7f4fdece0ebd78054ef5211261c7c37a1672bdc44223fe3fc4a
	Oct 20 13:21:49 default-k8s-diff-port-794175 kubelet[1298]: W1020 13:21:49.142057    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/crio-82264321f94d267c9616203c33db56a1e92587757f07a3bb1bc300e795af86b0 WatchSource:0}: Error finding container 82264321f94d267c9616203c33db56a1e92587757f07a3bb1bc300e795af86b0: Status 404 returned error can't find the container with id 82264321f94d267c9616203c33db56a1e92587757f07a3bb1bc300e795af86b0
	Oct 20 13:21:49 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:49.403154    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fgxwg" podStartSLOduration=42.403133882 podStartE2EDuration="42.403133882s" podCreationTimestamp="2025-10-20 13:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:21:49.368658054 +0000 UTC m=+47.384043349" watchObservedRunningTime="2025-10-20 13:21:49.403133882 +0000 UTC m=+47.418519161"
	Oct 20 13:21:49 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:49.430862    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.43084255 podStartE2EDuration="41.43084255s" podCreationTimestamp="2025-10-20 13:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:21:49.403633778 +0000 UTC m=+47.419019065" watchObservedRunningTime="2025-10-20 13:21:49.43084255 +0000 UTC m=+47.446227829"
	Oct 20 13:21:51 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:51.801904    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glm5f\" (UniqueName: \"kubernetes.io/projected/630ece87-4be2-448f-b9d0-4e832072a0c4-kube-api-access-glm5f\") pod \"busybox\" (UID: \"630ece87-4be2-448f-b9d0-4e832072a0c4\") " pod="default/busybox"
	Oct 20 13:21:54 default-k8s-diff-port-794175 kubelet[1298]: I1020 13:21:54.377846    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.368974366 podStartE2EDuration="3.37781981s" podCreationTimestamp="2025-10-20 13:21:51 +0000 UTC" firstStartedPulling="2025-10-20 13:21:51.978732532 +0000 UTC m=+49.994117819" lastFinishedPulling="2025-10-20 13:21:53.987577984 +0000 UTC m=+52.002963263" observedRunningTime="2025-10-20 13:21:54.377303595 +0000 UTC m=+52.392688873" watchObservedRunningTime="2025-10-20 13:21:54.37781981 +0000 UTC m=+52.393205089"
	
	
	==> storage-provisioner [cda049e26c5d4805d0fdae4d26d65de490b6be3b9175fbbd2e75298980cf7f11] <==
	I1020 13:21:49.197469       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 13:21:49.289128       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:21:49.289259       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 13:21:49.300538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:49.306771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:21:49.307161       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:21:49.307624       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"64224ef5-8dba-4cbf-9a3f-49d2b765cfef", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-794175_5785fb45-85b1-489d-b4be-4d374997eb5a became leader
	I1020 13:21:49.313379       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-794175_5785fb45-85b1-489d-b4be-4d374997eb5a!
	W1020 13:21:49.328433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:49.336081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:21:49.416451       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-794175_5785fb45-85b1-489d-b4be-4d374997eb5a!
	W1020 13:21:51.340572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:51.357474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:53.361748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:53.367499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:55.371430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:55.378227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:57.381962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:57.387682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:59.391171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:21:59.399833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:01.407006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:01.422539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-794175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (310.332713ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:22:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
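	(The exit status 11 above comes from minikube's paused-state probe: before enabling an addon it lists containers via runc, and because crio's runc state directory /run/runc is absent, `sudo runc list -f json` exits non-zero and the enable aborts. A minimal Go sketch of that kind of probe, under the assumption that it runs on the node where runc state is expected; the helper below is illustrative, not minikube's actual code:)

	// pausedProbeSketch: hypothetical stand-in for the "check paused" step
	// that produced MK_ADDON_ENABLE_PAUSED above. It shells out to runc the
	// same way the error message shows and treats a non-zero exit as fatal.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumption: executed where crio keeps runc state in /run/runc; if
		// that directory is missing, runc exits with status 1 and prints
		// "open /run/runc: no such file or directory", as seen above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("check paused failed: %v\noutput:\n%s", err, out)
			return
		}
		fmt.Printf("containers:\n%s", out)
	}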
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-979197 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-979197 describe deploy/metrics-server -n kube-system: exit status 1 (121.736057ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-979197 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
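	(The assertion at start_stop_delete_test.go:219 expects the --images/--registries overrides to rewrite the metrics-server image to fake.domain/registry.k8s.io/echoserver:1.4; since the deployment was never created, the describe output is empty and the substring check fails. A rough sketch of that check, assuming the describe output is captured as a string; names here are illustrative, not the test's actual code:)

	// imageAssertionSketch: approximates the final check of
	// EnableAddonWhileActive. With an empty describe output (deployment
	// NotFound), the Contains check fails exactly as reported above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		describeOut := "" // kubectl describe deploy/metrics-server -n kube-system returned NotFound
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(describeOut, want) {
			fmt.Printf("addon did not load correct image; expected to contain %q\n", want)
		}
	}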
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-979197
helpers_test.go:243: (dbg) docker inspect embed-certs-979197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b",
	        "Created": "2025-10-20T13:21:40.070634794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:21:40.150987887Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/hosts",
	        "LogPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b-json.log",
	        "Name": "/embed-certs-979197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-979197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-979197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b",
	                "LowerDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-979197",
	                "Source": "/var/lib/docker/volumes/embed-certs-979197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-979197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-979197",
	                "name.minikube.sigs.k8s.io": "embed-certs-979197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d17d3ea0f3674068eb0d90608e6f951b6b2e6a4c111863934d33e629fdc46b5e",
	            "SandboxKey": "/var/run/docker/netns/d17d3ea0f367",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-979197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:f4:3b:24:db:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bde21224527a25cf82271eb68321115d5ca91f933b235b8b28a8c48a7e3f01e5",
	                    "EndpointID": "351410d4ee28f9f1119156387a80a206e51d12bd5b18b2bbdb76bd8739031ebb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-979197",
	                        "737cd86e9d78"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-979197 -n embed-certs-979197
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-979197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-979197 logs -n 25: (1.573587423s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-314577    │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │                     │
	│ start   │ -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-314577    │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ delete  │ -p kubernetes-upgrade-314577                                                                                                                                                                                                                  │ kubernetes-upgrade-314577    │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:16 UTC │
	│ start   │ -p cert-options-123220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:16 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ cert-options-123220 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ ssh     │ -p cert-options-123220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ delete  │ -p cert-options-123220                                                                                                                                                                                                                        │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	│ stop    │ -p old-k8s-version-995203 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-995203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-995203 image list --format=json                                                                                                                                                                                               │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ pause   │ -p old-k8s-version-995203 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │                     │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:21 UTC │
	│ delete  │ -p cert-expiration-066011                                                                                                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:21 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-794175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:22:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:22:16.245194  489182 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:22:16.245320  489182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:22:16.245330  489182 out.go:374] Setting ErrFile to fd 2...
	I1020 13:22:16.245336  489182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:22:16.245611  489182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:22:16.246023  489182 out.go:368] Setting JSON to false
	I1020 13:22:16.247180  489182 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11087,"bootTime":1760955450,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:22:16.247256  489182 start.go:141] virtualization:  
	I1020 13:22:16.252261  489182 out.go:179] * [default-k8s-diff-port-794175] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:22:16.255651  489182 notify.go:220] Checking for updates...
	I1020 13:22:16.256213  489182 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:22:16.259811  489182 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:22:16.262849  489182 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:22:16.265908  489182 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:22:16.268963  489182 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:22:16.272058  489182 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:22:16.275696  489182 config.go:182] Loaded profile config "default-k8s-diff-port-794175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:22:16.276618  489182 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:22:16.306868  489182 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:22:16.307178  489182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:22:16.382434  489182 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:22:16.371535542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:22:16.382556  489182 docker.go:318] overlay module found
	I1020 13:22:16.387663  489182 out.go:179] * Using the docker driver based on existing profile
	I1020 13:22:16.390682  489182 start.go:305] selected driver: docker
	I1020 13:22:16.390745  489182 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-794175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-794175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:22:16.390855  489182 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:22:16.391628  489182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:22:16.456355  489182 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:22:16.446639685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:22:16.456782  489182 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:22:16.456810  489182 cni.go:84] Creating CNI manager for ""
	I1020 13:22:16.456871  489182 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:22:16.456915  489182 start.go:349] cluster config:
	{Name:default-k8s-diff-port-794175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-794175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:22:16.468436  489182 out.go:179] * Starting "default-k8s-diff-port-794175" primary control-plane node in "default-k8s-diff-port-794175" cluster
	I1020 13:22:16.472667  489182 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:22:16.475671  489182 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:22:16.478705  489182 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:22:16.478783  489182 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:22:16.478797  489182 cache.go:58] Caching tarball of preloaded images
	I1020 13:22:16.478806  489182 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:22:16.478896  489182 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:22:16.478906  489182 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:22:16.479017  489182 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/config.json ...
	I1020 13:22:16.504936  489182 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:22:16.504963  489182 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:22:16.504978  489182 cache.go:232] Successfully downloaded all kic artifacts
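The two cache lines above amount to a "pull only if missing" check against the local Docker daemon. For reference, a minimal sketch of that check via the Docker CLI (hypothetical helper, assuming the `docker` binary is on PATH; not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInDaemon reports whether the local Docker daemon already has ref,
	// mirroring the "Found ... in local docker daemon, skipping pull" path above.
	// `docker image inspect` exits non-zero when the image is absent.
	func imageInDaemon(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
		if imageInDaemon(ref) {
			fmt.Println("exists in daemon, skipping pull")
		} else {
			fmt.Println("pulling", ref)
		}
	}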
	I1020 13:22:16.505002  489182 start.go:360] acquireMachinesLock for default-k8s-diff-port-794175: {Name:mk9b6a4a43a929e914bf1e71003ee98f924dd735 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:22:16.505063  489182 start.go:364] duration metric: took 44.054µs to acquireMachinesLock for "default-k8s-diff-port-794175"
	I1020 13:22:16.505090  489182 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:22:16.505099  489182 fix.go:54] fixHost starting: 
	I1020 13:22:16.505780  489182 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-794175 --format={{.State.Status}}
	I1020 13:22:16.524823  489182 fix.go:112] recreateIfNeeded on default-k8s-diff-port-794175: state=Stopped err=<nil>
	W1020 13:22:16.524855  489182 fix.go:138] unexpected machine state, will restart: <nil>
	W1020 13:22:16.012695  485872 node_ready.go:57] node "embed-certs-979197" has "Ready":"False" status (will retry)
	W1020 13:22:18.018691  485872 node_ready.go:57] node "embed-certs-979197" has "Ready":"False" status (will retry)
	I1020 13:22:16.527882  489182 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-794175" ...
	I1020 13:22:16.527973  489182 cli_runner.go:164] Run: docker start default-k8s-diff-port-794175
	I1020 13:22:16.813268  489182 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-794175 --format={{.State.Status}}
	I1020 13:22:16.835114  489182 kic.go:430] container "default-k8s-diff-port-794175" state is running.
	I1020 13:22:16.835617  489182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-794175
	I1020 13:22:16.861792  489182 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/config.json ...
	I1020 13:22:16.862019  489182 machine.go:93] provisionDockerMachine start ...
	I1020 13:22:16.862080  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:16.887225  489182 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:16.887558  489182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1020 13:22:16.887569  489182 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:22:16.888306  489182 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1020 13:22:20.036227  489182 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-794175
	
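The first dial above fails with "ssh: handshake failed: EOF" because sshd inside the just-restarted container is not accepting connections yet; the provisioner simply retries until it succeeds a few seconds later. A minimal sketch of that retry pattern, assuming golang.org/x/crypto/ssh (the address, user, and key path are placeholders, and dialWithRetry is a hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps dialing until sshd inside the freshly started
	// container is ready, instead of failing on the first EOF handshake error.
	func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test machines have throwaway host keys
			Timeout:         5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("ssh not ready after %s: %w", timeout, err)
			}
			time.Sleep(500 * time.Millisecond) // sshd usually comes up within a few seconds
		}
	}

	func main() {
		client, err := dialWithRetry("127.0.0.1:33438", "docker", "/path/to/id_rsa", time.Minute)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}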
	I1020 13:22:20.036256  489182 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-794175"
	I1020 13:22:20.036333  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:20.054145  489182 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:20.054471  489182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1020 13:22:20.054491  489182 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-794175 && echo "default-k8s-diff-port-794175" | sudo tee /etc/hostname
	I1020 13:22:20.219863  489182 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-794175
	
	I1020 13:22:20.219952  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:20.240397  489182 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:20.240729  489182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1020 13:22:20.240764  489182 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-794175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-794175/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-794175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:22:20.397780  489182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:22:20.397848  489182 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:22:20.397875  489182 ubuntu.go:190] setting up certificates
	I1020 13:22:20.397885  489182 provision.go:84] configureAuth start
	I1020 13:22:20.398077  489182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-794175
	I1020 13:22:20.415422  489182 provision.go:143] copyHostCerts
	I1020 13:22:20.415511  489182 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:22:20.415534  489182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:22:20.415611  489182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:22:20.415715  489182 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:22:20.415728  489182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:22:20.415761  489182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:22:20.415829  489182 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:22:20.415839  489182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:22:20.415869  489182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:22:20.415929  489182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-794175 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-794175 localhost minikube]
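The provision step above regenerates the machine's server certificate with the SANs listed in the log line (loopback, the node IP, the hostname, localhost, minikube). A compressed sketch of creating such a SAN-bearing certificate with Go's crypto/x509 (self-signed here for brevity; the real flow signs with the minikube CA key):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-794175"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SANs from the log line above: IPs and DNS names the server may present.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:    []string{"default-k8s-diff-port-794175", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}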
	I1020 13:22:20.965518  489182 provision.go:177] copyRemoteCerts
	I1020 13:22:20.965606  489182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:22:20.965647  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:20.985520  489182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:22:21.096193  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:22:21.114998  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1020 13:22:21.133146  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:22:21.152322  489182 provision.go:87] duration metric: took 754.422799ms to configureAuth
	I1020 13:22:21.152347  489182 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:22:21.152603  489182 config.go:182] Loaded profile config "default-k8s-diff-port-794175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:22:21.152752  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:21.170520  489182 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:21.170846  489182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1020 13:22:21.170869  489182 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:22:21.489180  489182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:22:21.489243  489182 machine.go:96] duration metric: took 4.627213883s to provisionDockerMachine
	I1020 13:22:21.489272  489182 start.go:293] postStartSetup for "default-k8s-diff-port-794175" (driver="docker")
	I1020 13:22:21.489315  489182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:22:21.489397  489182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:22:21.489473  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:21.515910  489182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:22:21.625225  489182 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:22:21.628907  489182 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:22:21.628943  489182 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:22:21.628977  489182 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:22:21.629059  489182 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:22:21.629177  489182 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:22:21.629307  489182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:22:21.636867  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:22:21.655567  489182 start.go:296] duration metric: took 166.260171ms for postStartSetup
	I1020 13:22:21.655696  489182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:22:21.655768  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:21.672997  489182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:22:21.773147  489182 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:22:21.778114  489182 fix.go:56] duration metric: took 5.273008104s for fixHost
	I1020 13:22:21.778188  489182 start.go:83] releasing machines lock for "default-k8s-diff-port-794175", held for 5.273112524s
	I1020 13:22:21.778287  489182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-794175
	I1020 13:22:21.794891  489182 ssh_runner.go:195] Run: cat /version.json
	I1020 13:22:21.794941  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:21.794960  489182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:22:21.795084  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:21.813322  489182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:22:21.831770  489182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:22:21.920279  489182 ssh_runner.go:195] Run: systemctl --version
	I1020 13:22:22.018009  489182 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:22:22.061886  489182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:22:22.066452  489182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:22:22.066550  489182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:22:22.074948  489182 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:22:22.074976  489182 start.go:495] detecting cgroup driver to use...
	I1020 13:22:22.075011  489182 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:22:22.075062  489182 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:22:22.091918  489182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:22:22.105782  489182 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:22:22.105893  489182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:22:22.122417  489182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:22:22.135891  489182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:22:22.249497  489182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:22:22.374358  489182 docker.go:234] disabling docker service ...
	I1020 13:22:22.374487  489182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:22:22.389649  489182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:22:22.402799  489182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:22:22.525369  489182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:22:22.652725  489182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:22:22.666173  489182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:22:22.681830  489182 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:22:22.681916  489182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:22.691228  489182 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:22:22.691307  489182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:22.700154  489182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:22.709063  489182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:22.717978  489182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:22:22.728614  489182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:22.738285  489182 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:22.747030  489182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:22.756481  489182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:22:22.763817  489182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:22:22.771142  489182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:22:22.894233  489182 ssh_runner.go:195] Run: sudo systemctl restart crio
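The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pin the pause image, force the cgroupfs cgroup manager, open unprivileged low ports), then reloads systemd and restarts CRI-O; the socket wait below confirms the runtime came back. An equivalent sketch of one of those edits in plain Go, as a simplified stand-in for the sed call (setPauseImage is a hypothetical helper):

	package main

	import (
		"os"
		"regexp"
	)

	// setPauseImage rewrites the pause_image line in a CRI-O drop-in config,
	// the same edit the sed invocation above performs (it also matches a
	// commented-out "# pause_image = ..." line, as the sed pattern does).
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
			panic(err)
		}
		// A `systemctl restart crio`, as in the log, is still needed to apply it.
	}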
	I1020 13:22:23.088424  489182 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:22:23.088506  489182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:22:23.092627  489182 start.go:563] Will wait 60s for crictl version
	I1020 13:22:23.092691  489182 ssh_runner.go:195] Run: which crictl
	I1020 13:22:23.096251  489182 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:22:23.128772  489182 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:22:23.128873  489182 ssh_runner.go:195] Run: crio --version
	I1020 13:22:23.158792  489182 ssh_runner.go:195] Run: crio --version
	I1020 13:22:23.193022  489182 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1020 13:22:20.508645  485872 node_ready.go:57] node "embed-certs-979197" has "Ready":"False" status (will retry)
	W1020 13:22:22.509916  485872 node_ready.go:57] node "embed-certs-979197" has "Ready":"False" status (will retry)
	I1020 13:22:23.015450  485872 node_ready.go:49] node "embed-certs-979197" is "Ready"
	I1020 13:22:23.015490  485872 node_ready.go:38] duration metric: took 11.509701244s for node "embed-certs-979197" to be "Ready" ...
	I1020 13:22:23.015504  485872 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:22:23.015567  485872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:22:23.038298  485872 api_server.go:72] duration metric: took 12.314795035s to wait for apiserver process to appear ...
	I1020 13:22:23.038326  485872 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:22:23.038348  485872 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:22:23.055322  485872 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 13:22:23.056752  485872 api_server.go:141] control plane version: v1.34.1
	I1020 13:22:23.056801  485872 api_server.go:131] duration metric: took 18.450587ms to wait for apiserver health ...
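The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 "ok". A minimal sketch of that poll (certificate verification is skipped here for brevity; the real check trusts the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Sketch only; verify against the cluster CA in real code.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("ok")
	}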
	I1020 13:22:23.056812  485872 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:22:23.065588  485872 system_pods.go:59] 8 kube-system pods found
	I1020 13:22:23.065630  485872 system_pods.go:61] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Pending
	I1020 13:22:23.065637  485872 system_pods.go:61] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running
	I1020 13:22:23.065641  485872 system_pods.go:61] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:22:23.065646  485872 system_pods.go:61] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running
	I1020 13:22:23.065653  485872 system_pods.go:61] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running
	I1020 13:22:23.065657  485872 system_pods.go:61] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:22:23.065661  485872 system_pods.go:61] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running
	I1020 13:22:23.065669  485872 system_pods.go:61] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:22:23.065675  485872 system_pods.go:74] duration metric: took 8.85774ms to wait for pod list to return data ...
	I1020 13:22:23.065689  485872 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:22:23.070573  485872 default_sa.go:45] found service account: "default"
	I1020 13:22:23.070620  485872 default_sa.go:55] duration metric: took 4.923905ms for default service account to be created ...
	I1020 13:22:23.070630  485872 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:22:23.073936  485872 system_pods.go:86] 8 kube-system pods found
	I1020 13:22:23.073971  485872 system_pods.go:89] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:22:23.073978  485872 system_pods.go:89] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running
	I1020 13:22:23.073984  485872 system_pods.go:89] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:22:23.073988  485872 system_pods.go:89] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running
	I1020 13:22:23.073993  485872 system_pods.go:89] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running
	I1020 13:22:23.073996  485872 system_pods.go:89] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:22:23.074001  485872 system_pods.go:89] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running
	I1020 13:22:23.074007  485872 system_pods.go:89] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:22:23.074033  485872 retry.go:31] will retry after 245.611822ms: missing components: kube-dns
	I1020 13:22:23.325603  485872 system_pods.go:86] 8 kube-system pods found
	I1020 13:22:23.325638  485872 system_pods.go:89] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:22:23.325645  485872 system_pods.go:89] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running
	I1020 13:22:23.325651  485872 system_pods.go:89] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:22:23.325655  485872 system_pods.go:89] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running
	I1020 13:22:23.325660  485872 system_pods.go:89] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running
	I1020 13:22:23.325664  485872 system_pods.go:89] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:22:23.325667  485872 system_pods.go:89] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running
	I1020 13:22:23.325674  485872 system_pods.go:89] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:22:23.325691  485872 retry.go:31] will retry after 236.65337ms: missing components: kube-dns
	I1020 13:22:23.606633  485872 system_pods.go:86] 8 kube-system pods found
	I1020 13:22:23.606668  485872 system_pods.go:89] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:22:23.606675  485872 system_pods.go:89] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running
	I1020 13:22:23.606681  485872 system_pods.go:89] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:22:23.606685  485872 system_pods.go:89] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running
	I1020 13:22:23.606689  485872 system_pods.go:89] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running
	I1020 13:22:23.606693  485872 system_pods.go:89] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:22:23.606697  485872 system_pods.go:89] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running
	I1020 13:22:23.606704  485872 system_pods.go:89] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:22:23.606720  485872 retry.go:31] will retry after 472.762903ms: missing components: kube-dns
	I1020 13:22:24.084620  485872 system_pods.go:86] 8 kube-system pods found
	I1020 13:22:24.084661  485872 system_pods.go:89] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:22:24.084672  485872 system_pods.go:89] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running
	I1020 13:22:24.084678  485872 system_pods.go:89] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:22:24.084682  485872 system_pods.go:89] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running
	I1020 13:22:24.084687  485872 system_pods.go:89] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running
	I1020 13:22:24.084691  485872 system_pods.go:89] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:22:24.084694  485872 system_pods.go:89] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running
	I1020 13:22:24.084698  485872 system_pods.go:89] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Running
	I1020 13:22:24.084706  485872 system_pods.go:126] duration metric: took 1.014070514s to wait for k8s-apps to be running ...
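The retries above ("will retry after ...: missing components: kube-dns") come from repeatedly listing kube-system pods until every expected component reports Running. A condensed sketch of that loop with client-go (the kubeconfig path is a placeholder, and the running-count condition is simplified relative to minikube's per-component checks):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				running := 0
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						running++
					}
				}
				fmt.Printf("%d/%d kube-system pods running\n", running, len(pods.Items))
				if running > 0 && running == len(pods.Items) {
					return
				}
			}
			time.Sleep(250 * time.Millisecond) // the log above retries on a similar jittered interval
		}
		panic("kube-system pods never became Running")
	}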
	I1020 13:22:24.084713  485872 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:22:24.084775  485872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:22:24.099591  485872 system_svc.go:56] duration metric: took 14.86796ms WaitForService to wait for kubelet
	I1020 13:22:24.099618  485872 kubeadm.go:586] duration metric: took 13.376121321s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:22:24.099648  485872 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:22:24.102927  485872 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:22:24.102965  485872 node_conditions.go:123] node cpu capacity is 2
	I1020 13:22:24.102978  485872 node_conditions.go:105] duration metric: took 3.324457ms to run NodePressure ...
	I1020 13:22:24.102991  485872 start.go:241] waiting for startup goroutines ...
	I1020 13:22:24.102998  485872 start.go:246] waiting for cluster config update ...
	I1020 13:22:24.103039  485872 start.go:255] writing updated cluster config ...
	I1020 13:22:24.103348  485872 ssh_runner.go:195] Run: rm -f paused
	I1020 13:22:24.109930  485872 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:22:24.113940  485872 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9hxmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:23.196089  489182 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-794175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:22:23.222267  489182 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 13:22:23.226459  489182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:22:23.237572  489182 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-794175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-794175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:22:23.237700  489182 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:22:23.237754  489182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:22:23.275600  489182 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:22:23.275625  489182 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:22:23.275715  489182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:22:23.312938  489182 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:22:23.312958  489182 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:22:23.312965  489182 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1020 13:22:23.313063  489182 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-794175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-794175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 13:22:23.313143  489182 ssh_runner.go:195] Run: crio config
	I1020 13:22:23.388807  489182 cni.go:84] Creating CNI manager for ""
	I1020 13:22:23.388832  489182 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:22:23.388854  489182 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:22:23.388907  489182 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-794175 NodeName:default-k8s-diff-port-794175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:22:23.389081  489182 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-794175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
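The kubeadm/kubelet/kube-proxy config above is rendered from the "kubeadm options" struct logged a few entries earlier. A toy sketch of that render step with text/template (the struct fields and the template snippet here are invented for illustration; minikube's real template and data types differ):

	package main

	import (
		"os"
		"text/template"
	)

	// A few of the fields the logged "kubeadm options" struct carries.
	type kubeadmOpts struct {
		AdvertiseAddress string
		APIServerPort    int
		ClusterName      string
		PodSubnet        string
	}

	const snippet = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		tmpl := template.Must(template.New("kubeadm").Parse(snippet))
		opts := kubeadmOpts{
			AdvertiseAddress: "192.168.76.2",
			APIServerPort:    8444,
			ClusterName:      "default-k8s-diff-port-794175",
			PodSubnet:        "10.244.0.0/16",
		}
		if err := tmpl.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}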
	I1020 13:22:23.389172  489182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:22:23.399680  489182 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:22:23.399787  489182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:22:23.409084  489182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1020 13:22:23.427221  489182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:22:23.449734  489182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1020 13:22:23.471357  489182 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:22:23.479290  489182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
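The grep/echo/cp pipeline above is an idempotent /etc/hosts update: drop any existing control-plane.minikube.internal line, append a fresh one, and copy the result back into place. The same pattern in plain Go (upsertHost is a hypothetical helper; the path is parameterized so it can be tried on a scratch file):

	package main

	import (
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so that exactly one line
	// maps name to ip, mirroring the grep -v / echo / cp pipeline in the log.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("/tmp/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}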
	I1020 13:22:23.498545  489182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:22:23.665677  489182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:22:23.686325  489182 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175 for IP: 192.168.76.2
	I1020 13:22:23.686344  489182 certs.go:195] generating shared ca certs ...
	I1020 13:22:23.686362  489182 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:22:23.686501  489182 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:22:23.686547  489182 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:22:23.686557  489182 certs.go:257] generating profile certs ...
	I1020 13:22:23.686659  489182 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.key
	I1020 13:22:23.686728  489182 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/apiserver.key.6332dc82
	I1020 13:22:23.686772  489182 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/proxy-client.key
	I1020 13:22:23.686892  489182 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:22:23.686923  489182 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:22:23.686937  489182 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:22:23.686963  489182 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:22:23.686989  489182 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:22:23.687020  489182 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:22:23.687065  489182 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:22:23.687673  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:22:23.724615  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:22:23.752104  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:22:23.785312  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:22:23.822508  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1020 13:22:23.849095  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:22:23.872152  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:22:23.900568  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:22:23.924485  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:22:23.946619  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:22:23.973054  489182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:22:23.992208  489182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:22:24.011634  489182 ssh_runner.go:195] Run: openssl version
	I1020 13:22:24.019794  489182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:22:24.029192  489182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:22:24.033499  489182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:22:24.033597  489182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:22:24.079590  489182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:22:24.093016  489182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:22:24.105751  489182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:22:24.110491  489182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:22:24.110582  489182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:22:24.157711  489182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:22:24.166450  489182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:22:24.175021  489182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:22:24.179502  489182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:22:24.179607  489182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:22:24.231724  489182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
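The certs.go/ssh_runner lines above show how minikube installs each CA into the node's OpenSSL trust store: it computes the certificate's subject hash with `openssl x509 -hash -noout` and then symlinks `/etc/ssl/certs/<hash>.0` at the PEM file (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). Below is a minimal local Go sketch of that hash-and-link step, assuming the same openssl binary is on PATH; the paths in main are illustrative, not taken from minikube's source.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the logged commands: compute the OpenSSL subject
// hash of certPath, then create <trustDir>/<hash>.0 pointing at it.
func installCACert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	// Equivalent of the logged `ln -fs`: drop any stale link, then relink.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; the log uses /usr/share/ca-certificates/*.pem
	// and /etc/ssl/certs inside the node container.
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}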
	I1020 13:22:24.240032  489182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:22:24.243821  489182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:22:24.286977  489182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:22:24.329257  489182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:22:24.377197  489182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:22:24.437409  489182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:22:24.494859  489182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
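Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The same question can be answered without shelling out, as in this hedged sketch using Go's crypto/x509; the certificate path in main is one of those checked above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the question `openssl x509 -noout -checkend 86400` answers for 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}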
	I1020 13:22:24.572339  489182 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-794175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-794175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:22:24.572565  489182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:22:24.572673  489182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:22:24.633603  489182 cri.go:89] found id: "a1f57d1b86d10e16e97306a3d10e424a14e07532b8216a6771718f9c926ae56d"
	I1020 13:22:24.633678  489182 cri.go:89] found id: "9d5c53a7bdae3f025044a87f8c5d2e1b320b8ceedb2b698caa614049aa2ebc06"
	I1020 13:22:24.633706  489182 cri.go:89] found id: "096f1cd30b37ce6efa7756c97e11d57278a6e55b13f1e328c2db6254d6777462"
	I1020 13:22:24.633725  489182 cri.go:89] found id: "56b7c71f81efc16edacd521e6aae411626e76d228a65e9add6a6a338fa9c8438"
	I1020 13:22:24.633760  489182 cri.go:89] found id: ""
	I1020 13:22:24.633853  489182 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:22:24.655068  489182 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:22:24Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:22:24.655194  489182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:22:24.669393  489182 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:22:24.669458  489182 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:22:24.669543  489182 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:22:24.682993  489182 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:22:24.684013  489182 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-794175" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:22:24.684758  489182 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-794175" cluster setting kubeconfig missing "default-k8s-diff-port-794175" context setting]
	I1020 13:22:24.685751  489182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:22:24.687771  489182 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:22:24.697280  489182 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1020 13:22:24.697315  489182 kubeadm.go:601] duration metric: took 27.83846ms to restartPrimaryControlPlane
	I1020 13:22:24.697325  489182 kubeadm.go:402] duration metric: took 124.996864ms to StartCluster
	I1020 13:22:24.697365  489182 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:22:24.697448  489182 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:22:24.699037  489182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:22:24.699478  489182 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:22:24.699901  489182 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:22:24.699974  489182 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-794175"
	I1020 13:22:24.699988  489182 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-794175"
	W1020 13:22:24.699994  489182 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:22:24.700015  489182 host.go:66] Checking if "default-k8s-diff-port-794175" exists ...
	I1020 13:22:24.700566  489182 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-794175 --format={{.State.Status}}
	I1020 13:22:24.700898  489182 config.go:182] Loaded profile config "default-k8s-diff-port-794175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:22:24.700985  489182 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-794175"
	I1020 13:22:24.701026  489182 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-794175"
	W1020 13:22:24.701052  489182 addons.go:247] addon dashboard should already be in state true
	I1020 13:22:24.701102  489182 host.go:66] Checking if "default-k8s-diff-port-794175" exists ...
	I1020 13:22:24.701575  489182 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-794175 --format={{.State.Status}}
	I1020 13:22:24.712309  489182 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-794175"
	I1020 13:22:24.712476  489182 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-794175"
	I1020 13:22:24.712839  489182 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-794175 --format={{.State.Status}}
	I1020 13:22:24.720572  489182 out.go:179] * Verifying Kubernetes components...
	I1020 13:22:24.723763  489182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:22:24.755405  489182 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:22:24.756424  489182 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:22:24.760748  489182 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 13:22:24.643727  485872 pod_ready.go:94] pod "coredns-66bc5c9577-9hxmm" is "Ready"
	I1020 13:22:24.643761  485872 pod_ready.go:86] duration metric: took 529.799317ms for pod "coredns-66bc5c9577-9hxmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:24.651495  485872 pod_ready.go:83] waiting for pod "etcd-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:24.668004  485872 pod_ready.go:94] pod "etcd-embed-certs-979197" is "Ready"
	I1020 13:22:24.668032  485872 pod_ready.go:86] duration metric: took 16.506718ms for pod "etcd-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:24.672128  485872 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:24.680819  485872 pod_ready.go:94] pod "kube-apiserver-embed-certs-979197" is "Ready"
	I1020 13:22:24.680896  485872 pod_ready.go:86] duration metric: took 8.739084ms for pod "kube-apiserver-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:24.683743  485872 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:24.914443  485872 pod_ready.go:94] pod "kube-controller-manager-embed-certs-979197" is "Ready"
	I1020 13:22:24.914476  485872 pod_ready.go:86] duration metric: took 230.670178ms for pod "kube-controller-manager-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:25.114809  485872 pod_ready.go:83] waiting for pod "kube-proxy-gf2bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:25.514863  485872 pod_ready.go:94] pod "kube-proxy-gf2bz" is "Ready"
	I1020 13:22:25.514889  485872 pod_ready.go:86] duration metric: took 400.050531ms for pod "kube-proxy-gf2bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:25.715609  485872 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:26.114732  485872 pod_ready.go:94] pod "kube-scheduler-embed-certs-979197" is "Ready"
	I1020 13:22:26.114810  485872 pod_ready.go:86] duration metric: took 399.17477ms for pod "kube-scheduler-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:22:26.114837  485872 pod_ready.go:40] duration metric: took 2.00487554s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:22:26.224016  485872 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:22:26.227049  485872 out.go:179] * Done! kubectl is now configured to use "embed-certs-979197" cluster and "default" namespace by default
	I1020 13:22:24.760865  489182 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:22:24.760882  489182 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:22:24.760945  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:24.763684  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:22:24.763711  489182 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:22:24.763775  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:24.772025  489182 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-794175"
	W1020 13:22:24.772049  489182 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:22:24.772072  489182 host.go:66] Checking if "default-k8s-diff-port-794175" exists ...
	I1020 13:22:24.772609  489182 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-794175 --format={{.State.Status}}
	I1020 13:22:24.820558  489182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:22:24.821153  489182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:22:24.831428  489182 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:22:24.831457  489182 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:22:24.831517  489182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:22:24.857943  489182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:22:25.054889  489182 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:22:25.089257  489182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:22:25.101308  489182 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:22:25.214273  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:22:25.214294  489182 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:22:25.277549  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:22:25.277569  489182 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:22:25.308720  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:22:25.308740  489182 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:22:25.351404  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:22:25.351484  489182 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:22:25.389076  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:22:25.389148  489182 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:22:25.410146  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:22:25.410218  489182 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:22:25.434007  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:22:25.434071  489182 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:22:25.458157  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:22:25.458229  489182 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:22:25.479142  489182 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:22:25.479215  489182 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:22:25.501754  489182 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:22:31.086082  489182 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.03111181s)
	I1020 13:22:31.086155  489182 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.996825102s)
	I1020 13:22:31.086187  489182 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-794175" to be "Ready" ...
	I1020 13:22:31.086589  489182 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.985215104s)
	I1020 13:22:31.086929  489182 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.585055747s)
	I1020 13:22:31.090059  489182 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-794175 addons enable metrics-server
	
	I1020 13:22:31.121569  489182 node_ready.go:49] node "default-k8s-diff-port-794175" is "Ready"
	I1020 13:22:31.121600  489182 node_ready.go:38] duration metric: took 35.392019ms for node "default-k8s-diff-port-794175" to be "Ready" ...
	I1020 13:22:31.121615  489182 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:22:31.121675  489182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:22:31.139612  489182 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1020 13:22:31.140333  489182 api_server.go:72] duration metric: took 6.440810311s to wait for apiserver process to appear ...
	I1020 13:22:31.140409  489182 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:22:31.140446  489182 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1020 13:22:31.142461  489182 addons.go:514] duration metric: took 6.442555679s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1020 13:22:31.151966  489182 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1020 13:22:31.153446  489182 api_server.go:141] control plane version: v1.34.1
	I1020 13:22:31.153482  489182 api_server.go:131] duration metric: took 13.050837ms to wait for apiserver health ...
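The healthz probe logged above issues an HTTPS GET against https://192.168.76.2:8444/healthz and expects a 200 response with body "ok" before reading the control-plane version. A self-contained sketch of that probe follows; skipping TLS verification is an assumption made for brevity here, whereas the real client trusts the cluster CA generated earlier in this log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues the same kind of request the log shows. Skipping
// certificate verification is an illustration-only shortcut.
func probeHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d: %s", resp.StatusCode, body), nil
}

func main() {
	status, err := probeHealthz("https://192.168.76.2:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	fmt.Println(status) // expect "200: ok" on a healthy apiserver
}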
	I1020 13:22:31.153492  489182 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:22:31.158664  489182 system_pods.go:59] 8 kube-system pods found
	I1020 13:22:31.158708  489182 system_pods.go:61] "coredns-66bc5c9577-fgxwg" [aad94486-511b-4b40-bb0a-3062658223f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:22:31.158721  489182 system_pods.go:61] "etcd-default-k8s-diff-port-794175" [c00388a1-d23a-4d95-a1a5-26ed572a9b74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:22:31.158759  489182 system_pods.go:61] "kindnet-9w4q8" [1c5ecf5e-1060-4862-bfa6-2ae908741f24] Running
	I1020 13:22:31.158769  489182 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-794175" [2b42d7b6-7d4b-420b-baea-1c5475697fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:22:31.158785  489182 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-794175" [62b31ac2-6539-4cd5-aba2-5d5be72da8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:22:31.158790  489182 system_pods.go:61] "kube-proxy-jkb75" [6bc104b2-0343-49fc-9c3f-d45e5647f138] Running
	I1020 13:22:31.158799  489182 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-794175" [4ea78e53-7f7d-4aa2-9821-a60e1e38916c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:22:31.158820  489182 system_pods.go:61] "storage-provisioner" [e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6] Running
	I1020 13:22:31.158833  489182 system_pods.go:74] duration metric: took 5.315515ms to wait for pod list to return data ...
	I1020 13:22:31.158842  489182 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:22:31.162173  489182 default_sa.go:45] found service account: "default"
	I1020 13:22:31.162202  489182 default_sa.go:55] duration metric: took 3.348212ms for default service account to be created ...
	I1020 13:22:31.162213  489182 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:22:31.166398  489182 system_pods.go:86] 8 kube-system pods found
	I1020 13:22:31.166437  489182 system_pods.go:89] "coredns-66bc5c9577-fgxwg" [aad94486-511b-4b40-bb0a-3062658223f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:22:31.166447  489182 system_pods.go:89] "etcd-default-k8s-diff-port-794175" [c00388a1-d23a-4d95-a1a5-26ed572a9b74] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:22:31.166452  489182 system_pods.go:89] "kindnet-9w4q8" [1c5ecf5e-1060-4862-bfa6-2ae908741f24] Running
	I1020 13:22:31.166484  489182 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-794175" [2b42d7b6-7d4b-420b-baea-1c5475697fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:22:31.166498  489182 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-794175" [62b31ac2-6539-4cd5-aba2-5d5be72da8b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:22:31.166505  489182 system_pods.go:89] "kube-proxy-jkb75" [6bc104b2-0343-49fc-9c3f-d45e5647f138] Running
	I1020 13:22:31.166517  489182 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-794175" [4ea78e53-7f7d-4aa2-9821-a60e1e38916c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:22:31.166521  489182 system_pods.go:89] "storage-provisioner" [e9af3162-6ec7-4df2-9028-ac5d5aa7c6d6] Running
	I1020 13:22:31.166529  489182 system_pods.go:126] duration metric: took 4.31021ms to wait for k8s-apps to be running ...
	I1020 13:22:31.166541  489182 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:22:31.166631  489182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:22:31.183123  489182 system_svc.go:56] duration metric: took 16.572196ms WaitForService to wait for kubelet
	I1020 13:22:31.183205  489182 kubeadm.go:586] duration metric: took 6.483684832s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:22:31.183240  489182 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:22:31.186255  489182 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:22:31.186334  489182 node_conditions.go:123] node cpu capacity is 2
	I1020 13:22:31.186360  489182 node_conditions.go:105] duration metric: took 3.083544ms to run NodePressure ...
	I1020 13:22:31.186387  489182 start.go:241] waiting for startup goroutines ...
	I1020 13:22:31.186421  489182 start.go:246] waiting for cluster config update ...
	I1020 13:22:31.186448  489182 start.go:255] writing updated cluster config ...
	I1020 13:22:31.186871  489182 ssh_runner.go:195] Run: rm -f paused
	I1020 13:22:31.190827  489182 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:22:31.258050  489182 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fgxwg" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 13:22:33.264496  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:22:35.765137  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
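The pod_ready lines are a poll-until-deadline loop: re-check each kube-system pod's Ready condition every few seconds until it flips or the 4m0s budget runs out, logging a W-level line on each miss. A generic sketch of that wait pattern; the readiness closure in main is hypothetical, standing in for the API query pod_ready.go actually performs.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls check every interval until it returns true or timeout
// elapses — the same wait-with-deadline pattern pod_ready.go logs above.
func waitFor(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitFor(4*time.Minute, 2*time.Second, func() (bool, error) {
		// Hypothetical readiness probe: "ready" after a few seconds.
		return time.Since(start) > 4*time.Second, nil
	})
	fmt.Println("wait result:", err)
}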
	
	
	==> CRI-O <==
	Oct 20 13:22:23 embed-certs-979197 crio[842]: time="2025-10-20T13:22:23.46173992Z" level=info msg="Created container f92ca6bf8c9806ee5fe5fdc772bb2226309328adb957e69d5d66bdcc888a1308: kube-system/coredns-66bc5c9577-9hxmm/coredns" id=a6ff5b52-aedf-454e-8886-7f622b9314f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:22:23 embed-certs-979197 crio[842]: time="2025-10-20T13:22:23.462728651Z" level=info msg="Starting container: f92ca6bf8c9806ee5fe5fdc772bb2226309328adb957e69d5d66bdcc888a1308" id=5a33010a-3c07-40c7-aa0f-601767d1c818 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:22:23 embed-certs-979197 crio[842]: time="2025-10-20T13:22:23.475444206Z" level=info msg="Started container" PID=1731 containerID=f92ca6bf8c9806ee5fe5fdc772bb2226309328adb957e69d5d66bdcc888a1308 description=kube-system/coredns-66bc5c9577-9hxmm/coredns id=5a33010a-3c07-40c7-aa0f-601767d1c818 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8ea0c915e496b97d8d90d36ea121265e7685a4c9387b7f44b68da278ee288b5
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.854749018Z" level=info msg="Running pod sandbox: default/busybox/POD" id=31035c4d-ac3f-4640-aa92-8e75344be498 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.854815283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.875446264Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:712b090099f659a7180557b8e296fe09fd25f08e9fc5019fa5ecd8f52c327271 UID:50db164b-1b33-4592-8bf8-53911486ce65 NetNS:/var/run/netns/7835f809-8a14-4ce8-8c8b-18c11a0607b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d660}] Aliases:map[]}"
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.876578283Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.895706476Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:712b090099f659a7180557b8e296fe09fd25f08e9fc5019fa5ecd8f52c327271 UID:50db164b-1b33-4592-8bf8-53911486ce65 NetNS:/var/run/netns/7835f809-8a14-4ce8-8c8b-18c11a0607b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d660}] Aliases:map[]}"
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.89601016Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.90098981Z" level=info msg="Ran pod sandbox 712b090099f659a7180557b8e296fe09fd25f08e9fc5019fa5ecd8f52c327271 with infra container: default/busybox/POD" id=31035c4d-ac3f-4640-aa92-8e75344be498 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.902243406Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=595c4c68-7097-47a3-8dce-b250930283cc name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.902453394Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=595c4c68-7097-47a3-8dce-b250930283cc name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.902553523Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=595c4c68-7097-47a3-8dce-b250930283cc name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.909348467Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8f0ea2a4-d406-4f2e-8336-e6ec91ab9e16 name=/runtime.v1.ImageService/PullImage
	Oct 20 13:22:26 embed-certs-979197 crio[842]: time="2025-10-20T13:22:26.911269772Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.022043844Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=8f0ea2a4-d406-4f2e-8336-e6ec91ab9e16 name=/runtime.v1.ImageService/PullImage
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.023210489Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8bc4a35d-430e-4d15-8e7e-7f8a4af51abd name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.024987948Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eb090ad7-84e3-427c-9526-44330e6037a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.0303967Z" level=info msg="Creating container: default/busybox/busybox" id=8d71dc9a-dca8-49bd-9a9f-40666a8b951b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.030707965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.045581965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.046199236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.069428553Z" level=info msg="Created container b38e0326528c7a4d955688a2a14483fdb6d1701496f0e56ed6e81857b169fca7: default/busybox/busybox" id=8d71dc9a-dca8-49bd-9a9f-40666a8b951b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.073131888Z" level=info msg="Starting container: b38e0326528c7a4d955688a2a14483fdb6d1701496f0e56ed6e81857b169fca7" id=1b968212-5fe0-424d-b6d2-ebb0532a6192 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:22:29 embed-certs-979197 crio[842]: time="2025-10-20T13:22:29.078193492Z" level=info msg="Started container" PID=1784 containerID=b38e0326528c7a4d955688a2a14483fdb6d1701496f0e56ed6e81857b169fca7 description=default/busybox/busybox id=1b968212-5fe0-424d-b6d2-ebb0532a6192 name=/runtime.v1.RuntimeService/StartContainer sandboxID=712b090099f659a7180557b8e296fe09fd25f08e9fc5019fa5ecd8f52c327271
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	b38e0326528c7       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   712b090099f65       busybox                                      default
	f92ca6bf8c980       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago      Running             coredns                   0                   d8ea0c915e496       coredns-66bc5c9577-9hxmm                     kube-system
	5caebe13bf3ac       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago      Running             storage-provisioner       0                   413a1fb862926       storage-provisioner                          kube-system
	cc441366b60d7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      26 seconds ago      Running             kindnet-cni               0                   efe636a148607       kindnet-jzxdn                                kube-system
	6d18cc2aa7fe4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      26 seconds ago      Running             kube-proxy                0                   2df9236514e92       kube-proxy-gf2bz                             kube-system
	ec58a0f7e3544       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      41 seconds ago      Running             etcd                      0                   b937f46f57003       etcd-embed-certs-979197                      kube-system
	f7fea9521607f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      41 seconds ago      Running             kube-controller-manager   0                   65d853281367d       kube-controller-manager-embed-certs-979197   kube-system
	811df08455506       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      41 seconds ago      Running             kube-scheduler            0                   2665df555ce27       kube-scheduler-embed-certs-979197            kube-system
	0f8575d85f3a2       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      41 seconds ago      Running             kube-apiserver            0                   766527d7d4022       kube-apiserver-embed-certs-979197            kube-system
	
	
	==> coredns [f92ca6bf8c9806ee5fe5fdc772bb2226309328adb957e69d5d66bdcc888a1308] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] 127.0.0.1:39076 - 25566 "HINFO IN 6403456160968793799.7390742545295582932. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012726074s
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               embed-certs-979197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-979197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=embed-certs-979197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_22_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:22:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-979197
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:22:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:22:37 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:22:37 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:22:37 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:22:37 +0000   Mon, 20 Oct 2025 13:22:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-979197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                746efe57-6e86-4a6f-8038-c5a3b70dbd80
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-9hxmm                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-embed-certs-979197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-jzxdn                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-979197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-979197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-gf2bz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-979197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   NodeHasSufficientMemory  41s (x8 over 42s)  kubelet          Node embed-certs-979197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    41s (x8 over 42s)  kubelet          Node embed-certs-979197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 42s)  kubelet          Node embed-certs-979197 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node embed-certs-979197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node embed-certs-979197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node embed-certs-979197 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node embed-certs-979197 event: Registered Node embed-certs-979197 in Controller
	  Normal   NodeReady                16s                kubelet          Node embed-certs-979197 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct20 12:58] overlayfs: idmapped layers are currently not supported
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ec58a0f7e3544c1706bd87a8a61cb04543f86f7783e7bdb397b567d67e4c7694] <==
	{"level":"warn","ts":"2025-10-20T13:22:01.474249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.513025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.538759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.604652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.642016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.669511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.702132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.742524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.758524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.790629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.821482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.851677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.888017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.924893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.941613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.970587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:01.990340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:02.014810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:02.064485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:02.082109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:02.110398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:02.141834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:02.155713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:02.204232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:02.385273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53466","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:22:38 up  3:05,  0 user,  load average: 2.56, 2.64, 2.46
	Linux embed-certs-979197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cc441366b60d798f7018a3b93a2bf416477a1de4da3296f9eda338bf1e09ee05] <==
	I1020 13:22:12.609926       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:22:12.610301       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 13:22:12.610483       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:22:12.610534       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:22:12.610569       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:22:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:22:12.817703       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:22:12.817775       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:22:12.817808       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:22:12.818779       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 13:22:13.100735       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:22:13.100948       1 metrics.go:72] Registering metrics
	I1020 13:22:13.101054       1 controller.go:711] "Syncing nftables rules"
	I1020 13:22:22.820500       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 13:22:22.820548       1 main.go:301] handling current node
	I1020 13:22:32.817990       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 13:22:32.818025       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0f8575d85f3a227b8c9bf8ec2ec14aaa828e9e3a06ca379bdfbdd900ff4e8a7d] <==
	I1020 13:22:03.890072       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:22:03.890079       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:22:03.909631       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:22:03.929177       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:22:03.936412       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 13:22:03.984197       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:22:03.984290       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:22:04.549093       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 13:22:04.557806       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 13:22:04.557827       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:22:05.384331       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:22:05.469657       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:22:05.587115       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:22:05.654382       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 13:22:05.680194       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1020 13:22:05.684498       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:22:05.694637       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:22:06.481205       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:22:06.496794       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 13:22:06.510236       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 13:22:11.253623       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1020 13:22:11.352065       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:22:11.455052       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:22:11.513312       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1020 13:22:36.734471       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:43518: use of closed network connection
	
	
	==> kube-controller-manager [f7fea9521607f2a1ce461c038a7f5f29a4ea184dd8876a30837efb84cfb848bf] <==
	I1020 13:22:10.631591       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:22:10.631672       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:22:10.631683       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:22:10.631755       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 13:22:10.631838       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:22:10.632138       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 13:22:10.632252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:22:10.633538       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:22:10.634749       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 13:22:10.635221       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 13:22:10.636933       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:22:10.637020       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:22:10.637996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 13:22:10.637954       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 13:22:10.638062       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 13:22:10.638076       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 13:22:10.639036       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 13:22:10.641972       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 13:22:10.642132       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1020 13:22:10.642324       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 13:22:10.644439       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-979197" podCIDRs=["10.244.0.0/24"]
	I1020 13:22:10.650949       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:22:10.653885       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 13:22:10.665538       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1020 13:22:25.587618       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6d18cc2aa7fe492ccb534459d4260803f81966ea96cf504f92d8718379a9d6e4] <==
	I1020 13:22:12.361396       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:22:12.441927       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:22:12.542025       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:22:12.542062       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 13:22:12.542125       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:22:12.598300       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:22:12.598403       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:22:12.610971       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:22:12.611409       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:22:12.611478       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:22:12.612976       1 config.go:200] "Starting service config controller"
	I1020 13:22:12.613045       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:22:12.613087       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:22:12.613130       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:22:12.613183       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:22:12.613241       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:22:12.615363       1 config.go:309] "Starting node config controller"
	I1020 13:22:12.615464       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:22:12.615515       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:22:12.713159       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:22:12.713345       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 13:22:12.713350       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [811df08455506c500a129e3d6db5501c4f2594986b571d5cb5fbc9af179706e3] <==
	I1020 13:22:04.394263       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:22:04.397464       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:22:04.397575       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:22:04.398592       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:22:04.398750       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1020 13:22:04.399055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1020 13:22:04.411618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 13:22:04.411881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 13:22:04.411969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 13:22:04.412022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 13:22:04.412091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 13:22:04.412151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 13:22:04.412159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 13:22:04.412226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 13:22:04.412332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 13:22:04.412425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 13:22:04.412507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 13:22:04.412577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 13:22:04.412649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 13:22:04.412723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 13:22:04.412828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 13:22:04.412964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 13:22:04.413278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 13:22:04.414590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1020 13:22:05.998306       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: I1020 13:22:11.398883    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84729d76-950b-4e09-a264-1b61ffedaac7-lib-modules\") pod \"kindnet-jzxdn\" (UID: \"84729d76-950b-4e09-a264-1b61ffedaac7\") " pod="kube-system/kindnet-jzxdn"
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: I1020 13:22:11.398941    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d204f6c2-319e-4a08-96ad-a9e789c40df8-kube-proxy\") pod \"kube-proxy-gf2bz\" (UID: \"d204f6c2-319e-4a08-96ad-a9e789c40df8\") " pod="kube-system/kube-proxy-gf2bz"
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: I1020 13:22:11.398966    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/84729d76-950b-4e09-a264-1b61ffedaac7-cni-cfg\") pod \"kindnet-jzxdn\" (UID: \"84729d76-950b-4e09-a264-1b61ffedaac7\") " pod="kube-system/kindnet-jzxdn"
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: I1020 13:22:11.398998    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d204f6c2-319e-4a08-96ad-a9e789c40df8-xtables-lock\") pod \"kube-proxy-gf2bz\" (UID: \"d204f6c2-319e-4a08-96ad-a9e789c40df8\") " pod="kube-system/kube-proxy-gf2bz"
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: I1020 13:22:11.399022    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d204f6c2-319e-4a08-96ad-a9e789c40df8-lib-modules\") pod \"kube-proxy-gf2bz\" (UID: \"d204f6c2-319e-4a08-96ad-a9e789c40df8\") " pod="kube-system/kube-proxy-gf2bz"
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: E1020 13:22:11.604605    1315 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: E1020 13:22:11.604682    1315 projected.go:196] Error preparing data for projected volume kube-api-access-28lxf for pod kube-system/kube-proxy-gf2bz: configmap "kube-root-ca.crt" not found
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: E1020 13:22:11.604795    1315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d204f6c2-319e-4a08-96ad-a9e789c40df8-kube-api-access-28lxf podName:d204f6c2-319e-4a08-96ad-a9e789c40df8 nodeName:}" failed. No retries permitted until 2025-10-20 13:22:12.104767754 +0000 UTC m=+5.797152756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-28lxf" (UniqueName: "kubernetes.io/projected/d204f6c2-319e-4a08-96ad-a9e789c40df8-kube-api-access-28lxf") pod "kube-proxy-gf2bz" (UID: "d204f6c2-319e-4a08-96ad-a9e789c40df8") : configmap "kube-root-ca.crt" not found
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: E1020 13:22:11.621271    1315 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: E1020 13:22:11.621321    1315 projected.go:196] Error preparing data for projected volume kube-api-access-qg7sd for pod kube-system/kindnet-jzxdn: configmap "kube-root-ca.crt" not found
	Oct 20 13:22:11 embed-certs-979197 kubelet[1315]: E1020 13:22:11.621690    1315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/84729d76-950b-4e09-a264-1b61ffedaac7-kube-api-access-qg7sd podName:84729d76-950b-4e09-a264-1b61ffedaac7 nodeName:}" failed. No retries permitted until 2025-10-20 13:22:12.121396885 +0000 UTC m=+5.813781887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qg7sd" (UniqueName: "kubernetes.io/projected/84729d76-950b-4e09-a264-1b61ffedaac7-kube-api-access-qg7sd") pod "kindnet-jzxdn" (UID: "84729d76-950b-4e09-a264-1b61ffedaac7") : configmap "kube-root-ca.crt" not found
	Oct 20 13:22:12 embed-certs-979197 kubelet[1315]: I1020 13:22:12.115315    1315 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 20 13:22:12 embed-certs-979197 kubelet[1315]: W1020 13:22:12.526259    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/crio-efe636a148607a5485d34c0617dcda2ea135bbdfdac70470e8eb0972c097ae7b WatchSource:0}: Error finding container efe636a148607a5485d34c0617dcda2ea135bbdfdac70470e8eb0972c097ae7b: Status 404 returned error can't find the container with id efe636a148607a5485d34c0617dcda2ea135bbdfdac70470e8eb0972c097ae7b
	Oct 20 13:22:13 embed-certs-979197 kubelet[1315]: I1020 13:22:13.525230    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gf2bz" podStartSLOduration=2.525196608 podStartE2EDuration="2.525196608s" podCreationTimestamp="2025-10-20 13:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:22:12.564891471 +0000 UTC m=+6.257276481" watchObservedRunningTime="2025-10-20 13:22:13.525196608 +0000 UTC m=+7.217581610"
	Oct 20 13:22:13 embed-certs-979197 kubelet[1315]: I1020 13:22:13.575589    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jzxdn" podStartSLOduration=2.575560145 podStartE2EDuration="2.575560145s" podCreationTimestamp="2025-10-20 13:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:22:13.560719254 +0000 UTC m=+7.253104264" watchObservedRunningTime="2025-10-20 13:22:13.575560145 +0000 UTC m=+7.267945146"
	Oct 20 13:22:22 embed-certs-979197 kubelet[1315]: I1020 13:22:22.954711    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 20 13:22:23 embed-certs-979197 kubelet[1315]: I1020 13:22:23.088731    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9d863c1-b71a-470d-90fd-47fa59ace32e-config-volume\") pod \"coredns-66bc5c9577-9hxmm\" (UID: \"b9d863c1-b71a-470d-90fd-47fa59ace32e\") " pod="kube-system/coredns-66bc5c9577-9hxmm"
	Oct 20 13:22:23 embed-certs-979197 kubelet[1315]: I1020 13:22:23.088924    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfc2n\" (UniqueName: \"kubernetes.io/projected/8b66a916-769c-48f7-a28b-948022299e8e-kube-api-access-gfc2n\") pod \"storage-provisioner\" (UID: \"8b66a916-769c-48f7-a28b-948022299e8e\") " pod="kube-system/storage-provisioner"
	Oct 20 13:22:23 embed-certs-979197 kubelet[1315]: I1020 13:22:23.089037    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbx2\" (UniqueName: \"kubernetes.io/projected/b9d863c1-b71a-470d-90fd-47fa59ace32e-kube-api-access-mhbx2\") pod \"coredns-66bc5c9577-9hxmm\" (UID: \"b9d863c1-b71a-470d-90fd-47fa59ace32e\") " pod="kube-system/coredns-66bc5c9577-9hxmm"
	Oct 20 13:22:23 embed-certs-979197 kubelet[1315]: I1020 13:22:23.089126    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8b66a916-769c-48f7-a28b-948022299e8e-tmp\") pod \"storage-provisioner\" (UID: \"8b66a916-769c-48f7-a28b-948022299e8e\") " pod="kube-system/storage-provisioner"
	Oct 20 13:22:23 embed-certs-979197 kubelet[1315]: W1020 13:22:23.396773    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/crio-d8ea0c915e496b97d8d90d36ea121265e7685a4c9387b7f44b68da278ee288b5 WatchSource:0}: Error finding container d8ea0c915e496b97d8d90d36ea121265e7685a4c9387b7f44b68da278ee288b5: Status 404 returned error can't find the container with id d8ea0c915e496b97d8d90d36ea121265e7685a4c9387b7f44b68da278ee288b5
	Oct 20 13:22:23 embed-certs-979197 kubelet[1315]: I1020 13:22:23.682539    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9hxmm" podStartSLOduration=12.682522026000001 podStartE2EDuration="12.682522026s" podCreationTimestamp="2025-10-20 13:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:22:23.63851954 +0000 UTC m=+17.330904550" watchObservedRunningTime="2025-10-20 13:22:23.682522026 +0000 UTC m=+17.374907028"
	Oct 20 13:22:24 embed-certs-979197 kubelet[1315]: I1020 13:22:24.597001    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.596983964 podStartE2EDuration="13.596983964s" podCreationTimestamp="2025-10-20 13:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:22:23.690957394 +0000 UTC m=+17.383342404" watchObservedRunningTime="2025-10-20 13:22:24.596983964 +0000 UTC m=+18.289368974"
	Oct 20 13:22:26 embed-certs-979197 kubelet[1315]: I1020 13:22:26.618191    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6ck\" (UniqueName: \"kubernetes.io/projected/50db164b-1b33-4592-8bf8-53911486ce65-kube-api-access-cq6ck\") pod \"busybox\" (UID: \"50db164b-1b33-4592-8bf8-53911486ce65\") " pod="default/busybox"
	Oct 20 13:22:26 embed-certs-979197 kubelet[1315]: W1020 13:22:26.900313    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/crio-712b090099f659a7180557b8e296fe09fd25f08e9fc5019fa5ecd8f52c327271 WatchSource:0}: Error finding container 712b090099f659a7180557b8e296fe09fd25f08e9fc5019fa5ecd8f52c327271: Status 404 returned error can't find the container with id 712b090099f659a7180557b8e296fe09fd25f08e9fc5019fa5ecd8f52c327271
	
	
	==> storage-provisioner [5caebe13bf3ac0b89ff209438e9ffd4436f5626f7de273487a3fa7321763b4b0] <==
	I1020 13:22:23.440632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 13:22:23.646198       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:22:23.646280       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 13:22:23.740424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:23.769852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:22:23.770470       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:22:23.770873       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"390cf0fd-e9c8-4ac9-a37f-95614000b7ae", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-979197_c495961d-a289-4233-af3a-9e4b8dc7c863 became leader
	I1020 13:22:23.770937       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-979197_c495961d-a289-4233-af3a-9e4b8dc7c863!
	W1020 13:22:23.776213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:23.801655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:22:23.872601       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-979197_c495961d-a289-4233-af3a-9e4b8dc7c863!
	W1020 13:22:25.812836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:25.817756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:27.821047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:27.828824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:29.832000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:29.837959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:31.841453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:31.848299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:33.852162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:33.857001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:35.859909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:35.869256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:37.873828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:22:37.889606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-979197 -n embed-certs-979197
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-979197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.21s)
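To triage this failure class by hand, the following shell sketch replays the same steps the test drives. It assumes the embed-certs-979197 profile from the logs above still exists; metrics-server is only a placeholder addon name, since the addon under test is not shown in this excerpt:

	# Hypothetical manual replay of EnableAddonWhileActive (addon name is a placeholder):
	out/minikube-linux-arm64 -p embed-certs-979197 addons enable metrics-server --alsologtostderr -v=1
	# Same pod health check the post-mortem helper runs above:
	kubectl --context embed-certs-979197 get po -A --field-selector=status.phase!=Running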

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-794175 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-794175 --alsologtostderr -v=1: exit status 80 (1.844313579s)

-- stdout --
	* Pausing node default-k8s-diff-port-794175 ... 
	
	

-- /stdout --
** stderr ** 
	I1020 13:23:24.731290  494322 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:23:24.731486  494322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:23:24.731519  494322 out.go:374] Setting ErrFile to fd 2...
	I1020 13:23:24.731539  494322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:23:24.731797  494322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:23:24.732083  494322 out.go:368] Setting JSON to false
	I1020 13:23:24.732136  494322 mustload.go:65] Loading cluster: default-k8s-diff-port-794175
	I1020 13:23:24.738878  494322 config.go:182] Loaded profile config "default-k8s-diff-port-794175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:23:24.739455  494322 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-794175 --format={{.State.Status}}
	I1020 13:23:24.756327  494322 host.go:66] Checking if "default-k8s-diff-port-794175" exists ...
	I1020 13:23:24.756793  494322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:23:24.850646  494322 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-20 13:23:24.841231644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:23:24.851280  494322 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-794175 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 13:23:24.854838  494322 out.go:179] * Pausing node default-k8s-diff-port-794175 ... 
	I1020 13:23:24.857735  494322 host.go:66] Checking if "default-k8s-diff-port-794175" exists ...
	I1020 13:23:24.858067  494322 ssh_runner.go:195] Run: systemctl --version
	I1020 13:23:24.858119  494322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-794175
	I1020 13:23:24.875413  494322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/default-k8s-diff-port-794175/id_rsa Username:docker}
	I1020 13:23:24.979648  494322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:23:25.010219  494322 pause.go:52] kubelet running: true
	I1020 13:23:25.010309  494322 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:23:25.271723  494322 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:23:25.271833  494322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:23:25.341502  494322 cri.go:89] found id: "863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d"
	I1020 13:23:25.341524  494322 cri.go:89] found id: "295969f69655a7f6680e1af4de2531d515ee76538364168b72643bc1c0e2555c"
	I1020 13:23:25.341530  494322 cri.go:89] found id: "6c168c94f962c234b865074198edbc08aad0b0769ae1e96069c37ca5c002d8fe"
	I1020 13:23:25.341533  494322 cri.go:89] found id: "c5b36c984daad909967c8cc642e55e8e4193c1aa95b8708ae59d7ddad6f2d075"
	I1020 13:23:25.341537  494322 cri.go:89] found id: "c2da31ffb29883c5a509e1c067e67e14ad811168ec5c9dcca77b3fd063fead17"
	I1020 13:23:25.341540  494322 cri.go:89] found id: "a1f57d1b86d10e16e97306a3d10e424a14e07532b8216a6771718f9c926ae56d"
	I1020 13:23:25.341543  494322 cri.go:89] found id: "9d5c53a7bdae3f025044a87f8c5d2e1b320b8ceedb2b698caa614049aa2ebc06"
	I1020 13:23:25.341546  494322 cri.go:89] found id: "096f1cd30b37ce6efa7756c97e11d57278a6e55b13f1e328c2db6254d6777462"
	I1020 13:23:25.341550  494322 cri.go:89] found id: "56b7c71f81efc16edacd521e6aae411626e76d228a65e9add6a6a338fa9c8438"
	I1020 13:23:25.341580  494322 cri.go:89] found id: "025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	I1020 13:23:25.341589  494322 cri.go:89] found id: "0d4585e869c6427f8e929cf0f1676242d1b0d51446bc8b9735b6c931aee3d98d"
	I1020 13:23:25.341593  494322 cri.go:89] found id: ""
	I1020 13:23:25.341654  494322 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:23:25.360643  494322 retry.go:31] will retry after 200.736926ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:23:25Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:23:25.562128  494322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:23:25.576151  494322 pause.go:52] kubelet running: false
	I1020 13:23:25.576270  494322 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:23:25.755502  494322 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:23:25.755606  494322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:23:25.828154  494322 cri.go:89] found id: "863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d"
	I1020 13:23:25.828223  494322 cri.go:89] found id: "295969f69655a7f6680e1af4de2531d515ee76538364168b72643bc1c0e2555c"
	I1020 13:23:25.828243  494322 cri.go:89] found id: "6c168c94f962c234b865074198edbc08aad0b0769ae1e96069c37ca5c002d8fe"
	I1020 13:23:25.828265  494322 cri.go:89] found id: "c5b36c984daad909967c8cc642e55e8e4193c1aa95b8708ae59d7ddad6f2d075"
	I1020 13:23:25.828286  494322 cri.go:89] found id: "c2da31ffb29883c5a509e1c067e67e14ad811168ec5c9dcca77b3fd063fead17"
	I1020 13:23:25.828320  494322 cri.go:89] found id: "a1f57d1b86d10e16e97306a3d10e424a14e07532b8216a6771718f9c926ae56d"
	I1020 13:23:25.828342  494322 cri.go:89] found id: "9d5c53a7bdae3f025044a87f8c5d2e1b320b8ceedb2b698caa614049aa2ebc06"
	I1020 13:23:25.828411  494322 cri.go:89] found id: "096f1cd30b37ce6efa7756c97e11d57278a6e55b13f1e328c2db6254d6777462"
	I1020 13:23:25.828438  494322 cri.go:89] found id: "56b7c71f81efc16edacd521e6aae411626e76d228a65e9add6a6a338fa9c8438"
	I1020 13:23:25.828476  494322 cri.go:89] found id: "025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	I1020 13:23:25.828510  494322 cri.go:89] found id: "0d4585e869c6427f8e929cf0f1676242d1b0d51446bc8b9735b6c931aee3d98d"
	I1020 13:23:25.828529  494322 cri.go:89] found id: ""
	I1020 13:23:25.828614  494322 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:23:25.840853  494322 retry.go:31] will retry after 388.964817ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:23:25Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:23:26.230370  494322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:23:26.243720  494322 pause.go:52] kubelet running: false
	I1020 13:23:26.243795  494322 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:23:26.418102  494322 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:23:26.418246  494322 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:23:26.493448  494322 cri.go:89] found id: "863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d"
	I1020 13:23:26.493487  494322 cri.go:89] found id: "295969f69655a7f6680e1af4de2531d515ee76538364168b72643bc1c0e2555c"
	I1020 13:23:26.493495  494322 cri.go:89] found id: "6c168c94f962c234b865074198edbc08aad0b0769ae1e96069c37ca5c002d8fe"
	I1020 13:23:26.493499  494322 cri.go:89] found id: "c5b36c984daad909967c8cc642e55e8e4193c1aa95b8708ae59d7ddad6f2d075"
	I1020 13:23:26.493503  494322 cri.go:89] found id: "c2da31ffb29883c5a509e1c067e67e14ad811168ec5c9dcca77b3fd063fead17"
	I1020 13:23:26.493516  494322 cri.go:89] found id: "a1f57d1b86d10e16e97306a3d10e424a14e07532b8216a6771718f9c926ae56d"
	I1020 13:23:26.493523  494322 cri.go:89] found id: "9d5c53a7bdae3f025044a87f8c5d2e1b320b8ceedb2b698caa614049aa2ebc06"
	I1020 13:23:26.493526  494322 cri.go:89] found id: "096f1cd30b37ce6efa7756c97e11d57278a6e55b13f1e328c2db6254d6777462"
	I1020 13:23:26.493533  494322 cri.go:89] found id: "56b7c71f81efc16edacd521e6aae411626e76d228a65e9add6a6a338fa9c8438"
	I1020 13:23:26.493540  494322 cri.go:89] found id: "025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	I1020 13:23:26.493546  494322 cri.go:89] found id: "0d4585e869c6427f8e929cf0f1676242d1b0d51446bc8b9735b6c931aee3d98d"
	I1020 13:23:26.493557  494322 cri.go:89] found id: ""
	I1020 13:23:26.493611  494322 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:23:26.509170  494322 out.go:203] 
	W1020 13:23:26.512106  494322 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:23:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 13:23:26.512131  494322 out.go:285] * 
	W1020 13:23:26.519410  494322 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 13:23:26.522243  494322 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-794175 --alsologtostderr -v=1 failed: exit status 80
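The stderr above shows the actual failure: every `sudo runc list -f json` attempt exits 1 with `open /run/runc: no such file or directory`, so the pause path can enumerate CRI containers via crictl but cannot list them via runc, and gives up after its retries. A minimal sketch for confirming this by hand, assuming SSH access through the same profile (on CRI-O, runc state may live under CRI-O's own runtime root rather than /run/runc):

	# The exact command the pause path runs, executed manually inside the node:
	out/minikube-linux-arm64 -p default-k8s-diff-port-794175 ssh -- sudo runc list -f json
	# Cross-check against CRI-O's view of the same containers:
	out/minikube-linux-arm64 -p default-k8s-diff-port-794175 ssh -- sudo crictl ps -a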
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-794175
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-794175:

-- stdout --
	[
	    {
	        "Id": "a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4",
	        "Created": "2025-10-20T13:20:37.812533704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:22:16.562503218Z",
	            "FinishedAt": "2025-10-20T13:22:15.653102552Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/hosts",
	        "LogPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4-json.log",
	        "Name": "/default-k8s-diff-port-794175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-794175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-794175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4",
	                "LowerDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/merged",
	                "UpperDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/diff",
	                "WorkDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-794175",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-794175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-794175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-794175",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-794175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8bb54347c6a0847a4a3ce425fc4e4d9e13b24de57fe21f90bed8c768959bd344",
	            "SandboxKey": "/var/run/docker/netns/8bb54347c6a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-794175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:8f:a6:6c:c4:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f24d9859313beae9adf6bbf4afaf7590ce357fd35e4cb1d30db0d0f40ab82b66",
	                    "EndpointID": "bfbb49928f0d59d2ed93fe075f836ce4845a609138bc4aa9c60c3e498147f6e5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-794175",
	                        "a83c39bdcf1c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
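The inspect output above shows every container port published on 127.0.0.1 only (22→33438, 2376→33439, 5000→33440, 8444→33441, 32443→33442). A single mapping can be pulled out with the same Go template minikube itself uses later in this report; for example, to resolve the SSH port of this profile's container (assuming the container still exists):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-794175

For the state captured above this prints 33438.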
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175: exit status 2 (361.81378ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-794175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-794175 logs -n 25: (1.383149599s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-123220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ delete  │ -p cert-options-123220                                                                                                                                                                                                                        │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	│ stop    │ -p old-k8s-version-995203 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-995203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-995203 image list --format=json                                                                                                                                                                                               │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ pause   │ -p old-k8s-version-995203 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │                     │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:21 UTC │
	│ delete  │ -p cert-expiration-066011                                                                                                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:21 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-794175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ stop    │ -p embed-certs-979197 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ image   │ default-k8s-diff-port-794175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ pause   │ -p default-k8s-diff-port-794175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
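The Audit table above is rendered from minikube's persistent audit log for the whole test workspace. With access to that workspace, the raw entries behind it can be dumped directly (assuming this build supports the --audit flag of the logs command):

	out/minikube-linux-arm64 -p default-k8s-diff-port-794175 logs --audit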
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:22:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
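Given that header format, the warning and error lines in the dump below can be isolated with a plain filter keyed on the leading severity letter (a sketch):

	out/minikube-linux-arm64 -p default-k8s-diff-port-794175 logs -n 25 | grep -E '^[[:space:]]*[WE][0-9]{4} '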
	I1020 13:22:52.795809  492109 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:22:52.796160  492109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:22:52.796173  492109 out.go:374] Setting ErrFile to fd 2...
	I1020 13:22:52.796181  492109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:22:52.796612  492109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:22:52.797439  492109 out.go:368] Setting JSON to false
	I1020 13:22:52.798414  492109 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11123,"bootTime":1760955450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:22:52.798477  492109 start.go:141] virtualization:  
	I1020 13:22:52.801458  492109 out.go:179] * [embed-certs-979197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:22:52.805348  492109 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:22:52.805562  492109 notify.go:220] Checking for updates...
	I1020 13:22:52.811324  492109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:22:52.814240  492109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:22:52.817166  492109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:22:52.820021  492109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:22:52.822935  492109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:22:52.826307  492109 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:22:52.826870  492109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:22:52.854645  492109 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:22:52.854763  492109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:22:52.915579  492109 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:22:52.905724793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:22:52.915691  492109 docker.go:318] overlay module found
	I1020 13:22:52.918748  492109 out.go:179] * Using the docker driver based on existing profile
	I1020 13:22:52.921557  492109 start.go:305] selected driver: docker
	I1020 13:22:52.921576  492109 start.go:925] validating driver "docker" against &{Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:22:52.921689  492109 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:22:52.922430  492109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:22:52.987015  492109 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:22:52.976992844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:22:52.987366  492109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:22:52.987403  492109 cni.go:84] Creating CNI manager for ""
	I1020 13:22:52.987480  492109 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:22:52.987521  492109 start.go:349] cluster config:
	{Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:22:52.992532  492109 out.go:179] * Starting "embed-certs-979197" primary control-plane node in "embed-certs-979197" cluster
	I1020 13:22:52.995366  492109 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:22:52.998280  492109 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:22:53.001118  492109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:22:53.001147  492109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:22:53.001171  492109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:22:53.001181  492109 cache.go:58] Caching tarball of preloaded images
	I1020 13:22:53.001369  492109 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:22:53.001383  492109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:22:53.001499  492109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/config.json ...
	I1020 13:22:53.022523  492109 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:22:53.022555  492109 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:22:53.022578  492109 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:22:53.022602  492109 start.go:360] acquireMachinesLock for embed-certs-979197: {Name:mk95b0ada4992492fb672a02a9de970f7541a690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:22:53.022668  492109 start.go:364] duration metric: took 40.821µs to acquireMachinesLock for "embed-certs-979197"
	I1020 13:22:53.022692  492109 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:22:53.022700  492109 fix.go:54] fixHost starting: 
	I1020 13:22:53.022977  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:22:53.039824  492109 fix.go:112] recreateIfNeeded on embed-certs-979197: state=Stopped err=<nil>
	W1020 13:22:53.039866  492109 fix.go:138] unexpected machine state, will restart: <nil>
	W1020 13:22:51.263641  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:22:53.264827  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:22:55.764452  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	I1020 13:22:53.043076  492109 out.go:252] * Restarting existing docker container for "embed-certs-979197" ...
	I1020 13:22:53.043158  492109 cli_runner.go:164] Run: docker start embed-certs-979197
	I1020 13:22:53.305742  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:22:53.323292  492109 kic.go:430] container "embed-certs-979197" state is running.
	I1020 13:22:53.323688  492109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:22:53.347663  492109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/config.json ...
	I1020 13:22:53.347901  492109 machine.go:93] provisionDockerMachine start ...
	I1020 13:22:53.347964  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:53.369328  492109 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:53.369653  492109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1020 13:22:53.369663  492109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:22:53.370399  492109 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60378->127.0.0.1:33443: read: connection reset by peer
	I1020 13:22:56.520106  492109 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-979197
	
	I1020 13:22:56.520136  492109 ubuntu.go:182] provisioning hostname "embed-certs-979197"
	I1020 13:22:56.520210  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:56.538441  492109 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:56.538758  492109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1020 13:22:56.538775  492109 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-979197 && echo "embed-certs-979197" | sudo tee /etc/hostname
	I1020 13:22:56.699556  492109 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-979197
	
	I1020 13:22:56.699651  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:56.720174  492109 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:56.720529  492109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1020 13:22:56.720557  492109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-979197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-979197/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-979197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:22:56.876618  492109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:22:56.876645  492109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:22:56.876674  492109 ubuntu.go:190] setting up certificates
	I1020 13:22:56.876685  492109 provision.go:84] configureAuth start
	I1020 13:22:56.876746  492109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:22:56.894102  492109 provision.go:143] copyHostCerts
	I1020 13:22:56.894163  492109 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:22:56.894181  492109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:22:56.894254  492109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:22:56.894344  492109 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:22:56.894349  492109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:22:56.894373  492109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:22:56.894420  492109 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:22:56.894425  492109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:22:56.894447  492109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:22:56.894489  492109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-979197 san=[127.0.0.1 192.168.85.2 embed-certs-979197 localhost minikube]
	I1020 13:22:57.560443  492109 provision.go:177] copyRemoteCerts
	I1020 13:22:57.560525  492109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:22:57.560572  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:57.579516  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:57.688274  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:22:57.705353  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:22:57.722570  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:22:57.739538  492109 provision.go:87] duration metric: took 862.824759ms to configureAuth
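With configureAuth done, the server certificate pushed to /etc/docker/server.pem can be checked against the SAN list generated above (127.0.0.1, 192.168.85.2, embed-certs-979197, localhost, minikube); a minimal check, assuming openssl is available inside the kicbase image:

	docker exec embed-certs-979197 openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'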
	I1020 13:22:57.739566  492109 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:22:57.739799  492109 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:22:57.739912  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:57.757031  492109 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:57.757450  492109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1020 13:22:57.757471  492109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:22:58.091859  492109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:22:58.091883  492109 machine.go:96] duration metric: took 4.743971666s to provisionDockerMachine
	I1020 13:22:58.091894  492109 start.go:293] postStartSetup for "embed-certs-979197" (driver="docker")
	I1020 13:22:58.091904  492109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:22:58.091972  492109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:22:58.092011  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:58.115329  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:58.233331  492109 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:22:58.237129  492109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:22:58.237161  492109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:22:58.237174  492109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:22:58.237236  492109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:22:58.237331  492109 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:22:58.237448  492109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:22:58.245353  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:22:58.265393  492109 start.go:296] duration metric: took 173.484234ms for postStartSetup
	I1020 13:22:58.265556  492109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:22:58.265634  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:58.282740  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:58.385584  492109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:22:58.390658  492109 fix.go:56] duration metric: took 5.367951465s for fixHost
	I1020 13:22:58.390684  492109 start.go:83] releasing machines lock for "embed-certs-979197", held for 5.368004636s
	I1020 13:22:58.390784  492109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:22:58.412123  492109 ssh_runner.go:195] Run: cat /version.json
	I1020 13:22:58.412185  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:58.412485  492109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:22:58.412550  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:58.436631  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:58.448619  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:58.540755  492109 ssh_runner.go:195] Run: systemctl --version
	I1020 13:22:58.639848  492109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:22:58.676838  492109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:22:58.681979  492109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:22:58.682128  492109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:22:58.690249  492109 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:22:58.690276  492109 start.go:495] detecting cgroup driver to use...
	I1020 13:22:58.690311  492109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:22:58.690367  492109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:22:58.705625  492109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:22:58.718464  492109 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:22:58.718581  492109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:22:58.734927  492109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:22:58.749027  492109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:22:58.890513  492109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:22:59.025014  492109 docker.go:234] disabling docker service ...
	I1020 13:22:59.025138  492109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:22:59.043100  492109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:22:59.056349  492109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:22:59.179814  492109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:22:59.303401  492109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:22:59.316782  492109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:22:59.331422  492109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:22:59.331501  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.341107  492109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:22:59.341175  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.351661  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.360928  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.369648  492109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:22:59.377697  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.386684  492109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.395185  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.404300  492109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:22:59.411837  492109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:22:59.419565  492109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:22:59.539103  492109 ssh_runner.go:195] Run: sudo systemctl restart crio
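Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands in this log, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]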
	I1020 13:22:59.672708  492109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:22:59.672853  492109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:22:59.676949  492109 start.go:563] Will wait 60s for crictl version
	I1020 13:22:59.677061  492109 ssh_runner.go:195] Run: which crictl
	I1020 13:22:59.680639  492109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:22:59.710767  492109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:22:59.710926  492109 ssh_runner.go:195] Run: crio --version
	I1020 13:22:59.740516  492109 ssh_runner.go:195] Run: crio --version
	I1020 13:22:59.775382  492109 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1020 13:22:57.764712  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:23:00.285060  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	I1020 13:22:59.778149  492109 cli_runner.go:164] Run: docker network inspect embed-certs-979197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:22:59.794205  492109 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 13:22:59.798186  492109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:22:59.807916  492109 kubeadm.go:883] updating cluster {Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:22:59.808037  492109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:22:59.808094  492109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:22:59.844594  492109 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:22:59.844621  492109 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:22:59.844681  492109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:22:59.872655  492109 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:22:59.872685  492109 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:22:59.872694  492109 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 13:22:59.872813  492109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-979197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
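The unit and its 10-kubeadm.conf drop-in are copied to the node a few lines below; once the container is running, the effective unit can be inspected in place (assuming systemd inside the kicbase container, consistent with the /sbin/init entrypoint seen earlier):

	docker exec embed-certs-979197 systemctl cat kubelet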
	I1020 13:22:59.872938  492109 ssh_runner.go:195] Run: crio config
	I1020 13:22:59.950431  492109 cni.go:84] Creating CNI manager for ""
	I1020 13:22:59.950453  492109 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:22:59.950468  492109 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:22:59.950490  492109 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-979197 NodeName:embed-certs-979197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:22:59.950613  492109 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-979197"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
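	Once this config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below, it can be sanity-checked with the bundled kubeadm binary (assuming the 'config validate' subcommand is present in v1.34.1):
	
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new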
	
	I1020 13:22:59.950687  492109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:22:59.958770  492109 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:22:59.958850  492109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:22:59.966287  492109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1020 13:22:59.980789  492109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:22:59.995310  492109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1020 13:23:00.016404  492109 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:23:00.046119  492109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:23:00.107979  492109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:23:00.379383  492109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:23:00.407287  492109 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197 for IP: 192.168.85.2
	I1020 13:23:00.407320  492109 certs.go:195] generating shared ca certs ...
	I1020 13:23:00.407358  492109 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:00.407598  492109 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:23:00.407682  492109 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:23:00.407694  492109 certs.go:257] generating profile certs ...
	I1020 13:23:00.407802  492109 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/client.key
	I1020 13:23:00.407885  492109 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key.78ce9c55
	I1020 13:23:00.407947  492109 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.key
	I1020 13:23:00.408101  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:23:00.408152  492109 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:23:00.408166  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:23:00.408191  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:23:00.408226  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:23:00.408255  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:23:00.408314  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:23:00.409354  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:23:00.450673  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:23:00.481740  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:23:00.507930  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:23:00.534723  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1020 13:23:00.560173  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:23:00.584722  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:23:00.612678  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:23:00.638040  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:23:00.657046  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:23:00.677116  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:23:00.697410  492109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:23:00.710610  492109 ssh_runner.go:195] Run: openssl version
	I1020 13:23:00.717342  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:23:00.726107  492109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:23:00.730300  492109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:23:00.730378  492109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:23:00.775895  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:23:00.783842  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:23:00.791891  492109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:23:00.795557  492109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:23:00.795652  492109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:23:00.841530  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:23:00.849703  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:23:00.858214  492109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:00.862005  492109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:00.862070  492109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:00.902686  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
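	The three ln runs above implement OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is looked up under <subject-hash>.0, where the hash is what `openssl x509 -hash` prints (e.g. b5213941 for minikubeCA.pem here). The same idiom for an arbitrary PEM file, as a sketch (the path is hypothetical):
	
	    CERT=/usr/share/ca-certificates/example.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")    # subject-name hash, e.g. b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # the .0 suffix disambiguates hash collisions
	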
	I1020 13:23:00.911017  492109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:23:00.914880  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:23:00.958641  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:23:01.000289  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:23:01.048984  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:23:01.131269  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:23:01.269671  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
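	The six openssl runs above share one freshness test: `-checkend 86400` exits 0 only if the certificate is still valid 24 hours from now, so a non-zero exit tells the caller that regeneration is due. A standalone sketch (path hypothetical):
	
	    if openssl x509 -noout -in /var/lib/minikube/certs/example.crt -checkend 86400; then
	        echo "certificate valid for at least another 24h"
	    else
	        echo "certificate expires within 24h (or is already expired)"
	    fi
	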
	I1020 13:23:01.371041  492109 kubeadm.go:400] StartCluster: {Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:23:01.371199  492109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:23:01.371315  492109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:23:01.446862  492109 cri.go:89] found id: "e584e506b7520e3f3fc6c5efbd25f505db7a034d9a0b978b8af3a90afb94f84b"
	I1020 13:23:01.446940  492109 cri.go:89] found id: "fdf35c27cf71e3c6a3b8814a9f32bced0ae742f30f72aff6760a85b4a3a7145b"
	I1020 13:23:01.446960  492109 cri.go:89] found id: "aa8e7b9b68af423d774d170b1c024dba6f7323fa1d41441cd1e8ee87d1cd0140"
	I1020 13:23:01.446989  492109 cri.go:89] found id: "631a35129ac4de8ec7ce893c70fd5f816fb79609c9e434d0fb0f0fad3f58552b"
	I1020 13:23:01.447028  492109 cri.go:89] found id: ""
	I1020 13:23:01.447109  492109 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:23:01.474150  492109 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:23:01Z" level=error msg="open /run/runc: no such file or directory"
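	The warning above is tolerated: `runc list` fails with "open /run/runc: no such file or directory" when runc has no state directory on the host, i.e. there are no runc-tracked containers to enumerate, and the restart path continues anyway. A tolerant form of the same probe, as a sketch:
	
	    # treat a missing /run/runc as "no containers" instead of a hard error
	    sudo runc list -f json 2>/dev/null || echo '[]'
	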
	I1020 13:23:01.474288  492109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:23:01.485789  492109 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:23:01.485859  492109 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:23:01.485944  492109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:23:01.495478  492109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:23:01.496189  492109 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-979197" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:23:01.496540  492109 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-979197" cluster setting kubeconfig missing "embed-certs-979197" context setting]
	I1020 13:23:01.497052  492109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:01.498892  492109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:23:01.511087  492109 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 13:23:01.511170  492109 kubeadm.go:601] duration metric: took 25.291285ms to restartPrimaryControlPlane
	I1020 13:23:01.511194  492109 kubeadm.go:402] duration metric: took 140.163076ms to StartCluster
	I1020 13:23:01.511240  492109 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:01.511335  492109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:23:01.512773  492109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:01.513195  492109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:23:01.513609  492109 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:23:01.513700  492109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:23:01.513885  492109 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-979197"
	I1020 13:23:01.513914  492109 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-979197"
	W1020 13:23:01.513925  492109 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:23:01.513965  492109 host.go:66] Checking if "embed-certs-979197" exists ...
	I1020 13:23:01.514608  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:23:01.514834  492109 addons.go:69] Setting dashboard=true in profile "embed-certs-979197"
	I1020 13:23:01.514858  492109 addons.go:238] Setting addon dashboard=true in "embed-certs-979197"
	W1020 13:23:01.514867  492109 addons.go:247] addon dashboard should already be in state true
	I1020 13:23:01.514897  492109 host.go:66] Checking if "embed-certs-979197" exists ...
	I1020 13:23:01.515197  492109 addons.go:69] Setting default-storageclass=true in profile "embed-certs-979197"
	I1020 13:23:01.515239  492109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-979197"
	I1020 13:23:01.515399  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:23:01.515633  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:23:01.518449  492109 out.go:179] * Verifying Kubernetes components...
	I1020 13:23:01.524674  492109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:23:01.582908  492109 addons.go:238] Setting addon default-storageclass=true in "embed-certs-979197"
	W1020 13:23:01.582931  492109 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:23:01.582955  492109 host.go:66] Checking if "embed-certs-979197" exists ...
	I1020 13:23:01.583400  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:23:01.585627  492109 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:23:01.590131  492109 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 13:23:01.590148  492109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:01.594062  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:23:01.594096  492109 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:23:01.594110  492109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:23:01.594125  492109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:23:01.594184  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:23:01.594190  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:23:01.635148  492109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:23:01.635172  492109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:23:01.635238  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:23:01.656687  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:23:01.672321  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:23:01.691005  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:23:01.947356  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:23:01.947439  492109 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:23:01.978676  492109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:23:02.007012  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:23:02.007115  492109 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:23:02.030178  492109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:23:02.046284  492109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:23:02.090261  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:23:02.090341  492109 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:23:02.217919  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:23:02.217992  492109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:23:02.344932  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:23:02.344954  492109 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:23:02.384862  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:23:02.384884  492109 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:23:02.402921  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:23:02.402942  492109 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:23:02.420534  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:23:02.420556  492109 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:23:02.440200  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:23:02.440263  492109 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:23:02.469880  492109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1020 13:23:02.764113  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:23:04.764210  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	I1020 13:23:08.351669  492109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.37291266s)
	I1020 13:23:08.351732  492109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.32148221s)
	I1020 13:23:08.352101  492109 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.305747883s)
	I1020 13:23:08.352136  492109 node_ready.go:35] waiting up to 6m0s for node "embed-certs-979197" to be "Ready" ...
	I1020 13:23:08.378480  492109 node_ready.go:49] node "embed-certs-979197" is "Ready"
	I1020 13:23:08.378511  492109 node_ready.go:38] duration metric: took 26.358967ms for node "embed-certs-979197" to be "Ready" ...
	I1020 13:23:08.378534  492109 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:23:08.378646  492109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:23:08.397028  492109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.92705259s)
	I1020 13:23:08.400269  492109 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-979197 addons enable metrics-server
	
	I1020 13:23:08.403286  492109 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1020 13:23:07.264445  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:23:09.763065  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	I1020 13:23:10.264254  489182 pod_ready.go:94] pod "coredns-66bc5c9577-fgxwg" is "Ready"
	I1020 13:23:10.264286  489182 pod_ready.go:86] duration metric: took 39.006208579s for pod "coredns-66bc5c9577-fgxwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.267493  489182 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.272409  489182 pod_ready.go:94] pod "etcd-default-k8s-diff-port-794175" is "Ready"
	I1020 13:23:10.272435  489182 pod_ready.go:86] duration metric: took 4.890247ms for pod "etcd-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.275128  489182 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.284937  489182 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-794175" is "Ready"
	I1020 13:23:10.284966  489182 pod_ready.go:86] duration metric: took 9.807144ms for pod "kube-apiserver-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.368616  489182 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.461754  489182 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-794175" is "Ready"
	I1020 13:23:10.461782  489182 pod_ready.go:86] duration metric: took 93.135827ms for pod "kube-controller-manager-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.662274  489182 pod_ready.go:83] waiting for pod "kube-proxy-jkb75" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:11.062569  489182 pod_ready.go:94] pod "kube-proxy-jkb75" is "Ready"
	I1020 13:23:11.062640  489182 pod_ready.go:86] duration metric: took 400.299479ms for pod "kube-proxy-jkb75" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:11.261934  489182 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:11.662200  489182 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-794175" is "Ready"
	I1020 13:23:11.662283  489182 pod_ready.go:86] duration metric: took 400.323529ms for pod "kube-scheduler-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:11.662313  489182 pod_ready.go:40] duration metric: took 40.471401352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:23:11.776206  489182 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:23:11.779742  489182 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-794175" cluster and "default" namespace by default
	I1020 13:23:08.406103  492109 addons.go:514] duration metric: took 6.892411671s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1020 13:23:08.422339  492109 api_server.go:72] duration metric: took 6.909070769s to wait for apiserver process to appear ...
	I1020 13:23:08.422374  492109 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:23:08.422395  492109 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:23:08.435461  492109 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
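	The same health probe can be reproduced by hand; a one-liner sketch using the endpoint from this run (-k because the serving cert is signed by the cluster-local CA):
	
	    curl -sk https://192.168.85.2:8443/healthz    # prints "ok" on a healthy apiserver
	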
	I1020 13:23:08.436744  492109 api_server.go:141] control plane version: v1.34.1
	I1020 13:23:08.436784  492109 api_server.go:131] duration metric: took 14.401642ms to wait for apiserver health ...
	I1020 13:23:08.436794  492109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:23:08.441618  492109 system_pods.go:59] 8 kube-system pods found
	I1020 13:23:08.441667  492109 system_pods.go:61] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:23:08.441676  492109 system_pods.go:61] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:23:08.441691  492109 system_pods.go:61] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:23:08.441723  492109 system_pods.go:61] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:23:08.441736  492109 system_pods.go:61] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:23:08.441742  492109 system_pods.go:61] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:23:08.441749  492109 system_pods.go:61] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:23:08.441765  492109 system_pods.go:61] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Running
	I1020 13:23:08.441771  492109 system_pods.go:74] duration metric: took 4.971987ms to wait for pod list to return data ...
	I1020 13:23:08.441783  492109 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:23:08.445203  492109 default_sa.go:45] found service account: "default"
	I1020 13:23:08.445239  492109 default_sa.go:55] duration metric: took 3.449137ms for default service account to be created ...
	I1020 13:23:08.445249  492109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:23:08.448648  492109 system_pods.go:86] 8 kube-system pods found
	I1020 13:23:08.448690  492109 system_pods.go:89] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:23:08.448700  492109 system_pods.go:89] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:23:08.448707  492109 system_pods.go:89] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:23:08.448715  492109 system_pods.go:89] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:23:08.448731  492109 system_pods.go:89] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:23:08.448741  492109 system_pods.go:89] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:23:08.448758  492109 system_pods.go:89] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:23:08.448768  492109 system_pods.go:89] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Running
	I1020 13:23:08.448776  492109 system_pods.go:126] duration metric: took 3.520851ms to wait for k8s-apps to be running ...
	I1020 13:23:08.448787  492109 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:23:08.448852  492109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:23:08.470222  492109 system_svc.go:56] duration metric: took 21.425889ms WaitForService to wait for kubelet
	I1020 13:23:08.470251  492109 kubeadm.go:586] duration metric: took 6.956986721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:23:08.470282  492109 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:23:08.473481  492109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:23:08.473553  492109 node_conditions.go:123] node cpu capacity is 2
	I1020 13:23:08.473581  492109 node_conditions.go:105] duration metric: took 3.29176ms to run NodePressure ...
	I1020 13:23:08.473606  492109 start.go:241] waiting for startup goroutines ...
	I1020 13:23:08.473643  492109 start.go:246] waiting for cluster config update ...
	I1020 13:23:08.473676  492109 start.go:255] writing updated cluster config ...
	I1020 13:23:08.474013  492109 ssh_runner.go:195] Run: rm -f paused
	I1020 13:23:08.477761  492109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:23:08.481881  492109 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9hxmm" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 13:23:10.496746  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:12.988161  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:14.988428  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:17.489316  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:19.987651  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:21.990430  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.166903946Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c06f233c-0e2b-423d-be8f-7ce72b787b6a name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.168135987Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a1d7f430-babe-41ea-8a40-76dae6528f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.168254831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.178818794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.192611417Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f1e6d6089ddd038c2deb2f59237d78c4eb261b2e90468820b55ac95aa15d3cce/merged/etc/passwd: no such file or directory"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.192694019Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f1e6d6089ddd038c2deb2f59237d78c4eb261b2e90468820b55ac95aa15d3cce/merged/etc/group: no such file or directory"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.193222658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.23457849Z" level=info msg="Created container 863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d: kube-system/storage-provisioner/storage-provisioner" id=a1d7f430-babe-41ea-8a40-76dae6528f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.235643661Z" level=info msg="Starting container: 863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d" id=61491660-1a50-4828-a3b0-b8a0477ed3af name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.238737191Z" level=info msg="Started container" PID=1634 containerID=863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d description=kube-system/storage-provisioner/storage-provisioner id=61491660-1a50-4828-a3b0-b8a0477ed3af name=/runtime.v1.RuntimeService/StartContainer sandboxID=83ffdf4fee19656a187926f87ff33c9a3797027fc5e31c6a1eb073791c3ccc44
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.730624524Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.734804427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.734958866Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.735036504Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.740525454Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.740713182Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.74079347Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.74445202Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.744486827Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.744513346Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.749912055Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.749948125Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.74996895Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.756864589Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.756899757Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	863af67c2dcab       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   83ffdf4fee196       storage-provisioner                                    kube-system
	025be26ce1b35       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   5a4e3c60352bd       dashboard-metrics-scraper-6ffb444bf9-nzzsl             kubernetes-dashboard
	0d4585e869c64       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   9e5a0acd70164       kubernetes-dashboard-855c9754f9-spstf                  kubernetes-dashboard
	295969f69655a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   8f717c96ad568       coredns-66bc5c9577-fgxwg                               kube-system
	24a62685df217       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   2b5872d902bbc       busybox                                                default
	6c168c94f962c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   633da3c6f3067       kube-proxy-jkb75                                       kube-system
	c5b36c984daad       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   bea2eda7425c2       kindnet-9w4q8                                          kube-system
	c2da31ffb2988       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   83ffdf4fee196       storage-provisioner                                    kube-system
	a1f57d1b86d10       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   d4408c52ccceb       kube-controller-manager-default-k8s-diff-port-794175   kube-system
	9d5c53a7bdae3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   70b746c834d7c       kube-apiserver-default-k8s-diff-port-794175            kube-system
	096f1cd30b37c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e43dba1df4b7b       kube-scheduler-default-k8s-diff-port-794175            kube-system
	56b7c71f81efc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2756e421af61c       etcd-default-k8s-diff-port-794175                      kube-system
	
	
	==> coredns [295969f69655a7f6680e1af4de2531d515ee76538364168b72643bc1c0e2555c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54007 - 43100 "HINFO IN 7076610121752415219.6253543287017802264. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01564955s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
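	The i/o timeouts above indicate that coredns could not yet reach the in-cluster apiserver VIP (10.96.0.1:443) after the restart, typically until kube-proxy has reprogrammed the service rules; they stop once a dial succeeds. A quick check that the VIP is serviceable again, as a sketch:
	
	    kubectl get svc kubernetes -n default          # ClusterIP should be 10.96.0.1
	    kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes
	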
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-794175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-794175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=default-k8s-diff-port-794175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_21_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:20:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-794175
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:23:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:23:00 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:23:00 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:23:00 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:23:00 +0000   Mon, 20 Oct 2025 13:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-794175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e9dbb7f7-719c-4a64-84f6-74d2f47cffc5
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-fgxwg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-794175                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-9w4q8                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-794175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-794175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-jkb75                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-794175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nzzsl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-spstf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m19s              kube-proxy       
	  Normal   Starting                 56s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m25s              kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m25s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s              kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s              kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m25s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s              node-controller  Node default-k8s-diff-port-794175 event: Registered Node default-k8s-diff-port-794175 in Controller
	  Normal   NodeReady                99s                kubelet          Node default-k8s-diff-port-794175 status is now: NodeReady
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-794175 event: Registered Node default-k8s-diff-port-794175 in Controller
	
	
	==> dmesg <==
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [56b7c71f81efc16edacd521e6aae411626e76d228a65e9add6a6a338fa9c8438] <==
	{"level":"warn","ts":"2025-10-20T13:22:27.991696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.008732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.033158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.056771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.067371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.084723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.098314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.115963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.132139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.152083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.166742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.182811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.204626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.219971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.235909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.252502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.278223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.303994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.319713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.334443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.356519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.384229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.398807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.414165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.465802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45560","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:23:27 up  3:05,  0 user,  load average: 2.78, 2.68, 2.48
	Linux default-k8s-diff-port-794175 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c5b36c984daad909967c8cc642e55e8e4193c1aa95b8708ae59d7ddad6f2d075] <==
	I1020 13:22:30.531487       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:22:30.603046       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:22:30.603206       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:22:30.603219       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:22:30.603230       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:22:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:22:30.729955       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:22:30.729972       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:22:30.729980       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:22:30.730101       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:23:00.733872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:23:00.733987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:23:00.734008       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 13:23:00.734069       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1020 13:23:02.330247       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:23:02.330403       1 metrics.go:72] Registering metrics
	I1020 13:23:02.330525       1 controller.go:711] "Syncing nftables rules"
	I1020 13:23:10.730198       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:23:10.730365       1 main.go:301] handling current node
	I1020 13:23:20.736438       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:23:20.736475       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d5c53a7bdae3f025044a87f8c5d2e1b320b8ceedb2b698caa614049aa2ebc06] <==
	I1020 13:22:29.527254       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 13:22:29.527285       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 13:22:29.527567       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 13:22:29.527745       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 13:22:29.527793       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:22:29.548022       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 13:22:29.552124       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 13:22:29.552492       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 13:22:29.556681       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:22:29.561251       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 13:22:29.561701       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 13:22:29.569393       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:22:29.570967       1 cache.go:39] Caches are synced for autoregister controller
	E1020 13:22:29.614855       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:22:30.062274       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:22:30.161058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:22:30.699837       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:22:30.768745       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:22:30.819945       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:22:30.837440       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:22:30.951404       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.224.15"}
	I1020 13:22:30.980326       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.64.37"}
	I1020 13:22:32.814326       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:22:33.163989       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:22:33.362856       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a1f57d1b86d10e16e97306a3d10e424a14e07532b8216a6771718f9c926ae56d] <==
	I1020 13:22:32.813866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:22:32.813942       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 13:22:32.816477       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:22:32.816635       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 13:22:32.820263       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 13:22:32.822650       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 13:22:32.824454       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:22:32.826927       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 13:22:32.828113       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 13:22:32.829235       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 13:22:32.830359       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:22:32.832587       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 13:22:32.835213       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:22:32.844502       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:22:32.856236       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:22:32.856249       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 13:22:32.856283       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:22:32.856512       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:22:32.856437       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 13:22:32.856614       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:22:32.856754       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-794175"
	I1020 13:22:32.856834       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 13:22:32.857398       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:22:32.858946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:22:32.866174       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [6c168c94f962c234b865074198edbc08aad0b0769ae1e96069c37ca5c002d8fe] <==
	I1020 13:22:30.797047       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:22:30.910540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:22:31.015475       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:22:31.015530       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:22:31.015606       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:22:31.132320       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:22:31.132468       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:22:31.148677       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:22:31.149133       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:22:31.149389       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:22:31.150822       1 config.go:200] "Starting service config controller"
	I1020 13:22:31.150946       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:22:31.151004       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:22:31.151033       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:22:31.151049       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:22:31.151053       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:22:31.154557       1 config.go:309] "Starting node config controller"
	I1020 13:22:31.162848       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:22:31.162934       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:22:31.251286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:22:31.251287       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:22:31.251313       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [096f1cd30b37ce6efa7756c97e11d57278a6e55b13f1e328c2db6254d6777462] <==
	I1020 13:22:27.516482       1 serving.go:386] Generated self-signed cert in-memory
	W1020 13:22:29.398928       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 13:22:29.398955       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 13:22:29.398965       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 13:22:29.398972       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 13:22:29.566990       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:22:29.577839       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:22:29.580756       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:22:29.583844       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:22:29.583886       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:22:29.583905       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:22:29.684665       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:22:33 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:33.588272     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww5qm\" (UniqueName: \"kubernetes.io/projected/a094196e-f7e4-45b1-9a0a-72749b039ea4-kube-api-access-ww5qm\") pod \"dashboard-metrics-scraper-6ffb444bf9-nzzsl\" (UID: \"a094196e-f7e4-45b1-9a0a-72749b039ea4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl"
	Oct 20 13:22:33 default-k8s-diff-port-794175 kubelet[773]: W1020 13:22:33.781408     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/crio-5a4e3c60352bd8f827498b7782b144b02c667cad2c9a77b20fbf72df8cd4fc36 WatchSource:0}: Error finding container 5a4e3c60352bd8f827498b7782b144b02c667cad2c9a77b20fbf72df8cd4fc36: Status 404 returned error can't find the container with id 5a4e3c60352bd8f827498b7782b144b02c667cad2c9a77b20fbf72df8cd4fc36
	Oct 20 13:22:33 default-k8s-diff-port-794175 kubelet[773]: W1020 13:22:33.799137     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/crio-9e5a0acd701649be3f66c3ac221e182e3376b662c61f3f8deb8307fceb84f80b WatchSource:0}: Error finding container 9e5a0acd701649be3f66c3ac221e182e3376b662c61f3f8deb8307fceb84f80b: Status 404 returned error can't find the container with id 9e5a0acd701649be3f66c3ac221e182e3376b662c61f3f8deb8307fceb84f80b
	Oct 20 13:22:39 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:39.076449     773 scope.go:117] "RemoveContainer" containerID="dc884a6f38fedde045726b07bfa831f8071310f5ca97a59a5f2a23c3c35a9d4c"
	Oct 20 13:22:40 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:40.051594     773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 13:22:40 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:40.088605     773 scope.go:117] "RemoveContainer" containerID="dc884a6f38fedde045726b07bfa831f8071310f5ca97a59a5f2a23c3c35a9d4c"
	Oct 20 13:22:40 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:40.089606     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:40 default-k8s-diff-port-794175 kubelet[773]: E1020 13:22:40.090131     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:22:41 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:41.092593     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:41 default-k8s-diff-port-794175 kubelet[773]: E1020 13:22:41.092758     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:22:44 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:44.764260     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:44 default-k8s-diff-port-794175 kubelet[773]: E1020 13:22:44.764514     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:22:45 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:45.147262     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-spstf" podStartSLOduration=1.603767429 podStartE2EDuration="12.147239854s" podCreationTimestamp="2025-10-20 13:22:33 +0000 UTC" firstStartedPulling="2025-10-20 13:22:33.802737972 +0000 UTC m=+10.111748251" lastFinishedPulling="2025-10-20 13:22:44.346210405 +0000 UTC m=+20.655220676" observedRunningTime="2025-10-20 13:22:45.146575895 +0000 UTC m=+21.455586174" watchObservedRunningTime="2025-10-20 13:22:45.147239854 +0000 UTC m=+21.456250133"
	Oct 20 13:22:56 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:56.899027     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:57 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:57.135301     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:57 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:57.135698     773 scope.go:117] "RemoveContainer" containerID="025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	Oct 20 13:22:57 default-k8s-diff-port-794175 kubelet[773]: E1020 13:22:57.136042     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:23:01 default-k8s-diff-port-794175 kubelet[773]: I1020 13:23:01.164273     773 scope.go:117] "RemoveContainer" containerID="c2da31ffb29883c5a509e1c067e67e14ad811168ec5c9dcca77b3fd063fead17"
	Oct 20 13:23:04 default-k8s-diff-port-794175 kubelet[773]: I1020 13:23:04.764632     773 scope.go:117] "RemoveContainer" containerID="025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	Oct 20 13:23:04 default-k8s-diff-port-794175 kubelet[773]: E1020 13:23:04.765268     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:23:16 default-k8s-diff-port-794175 kubelet[773]: I1020 13:23:16.899383     773 scope.go:117] "RemoveContainer" containerID="025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	Oct 20 13:23:16 default-k8s-diff-port-794175 kubelet[773]: E1020 13:23:16.899571     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:23:25 default-k8s-diff-port-794175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:23:25 default-k8s-diff-port-794175 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:23:25 default-k8s-diff-port-794175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0d4585e869c6427f8e929cf0f1676242d1b0d51446bc8b9735b6c931aee3d98d] <==
	2025/10/20 13:22:44 Starting overwatch
	2025/10/20 13:22:44 Using namespace: kubernetes-dashboard
	2025/10/20 13:22:44 Using in-cluster config to connect to apiserver
	2025/10/20 13:22:44 Using secret token for csrf signing
	2025/10/20 13:22:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 13:22:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 13:22:44 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 13:22:44 Generating JWE encryption key
	2025/10/20 13:22:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 13:22:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 13:22:44 Initializing JWE encryption key from synchronized object
	2025/10/20 13:22:44 Creating in-cluster Sidecar client
	2025/10/20 13:22:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:22:44 Serving insecurely on HTTP port: 9090
	2025/10/20 13:23:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d] <==
	I1020 13:23:01.291112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 13:23:01.325996       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:23:01.326042       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 13:23:01.328537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:04.785581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:09.046052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:12.644929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:15.698200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:18.720261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:18.725332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:23:18.725497       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:23:18.725659       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-794175_661e2458-9d9c-466e-9c8f-f14a92ade907!
	I1020 13:23:18.726570       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"64224ef5-8dba-4cbf-9a3f-49d2b765cfef", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-794175_661e2458-9d9c-466e-9c8f-f14a92ade907 became leader
	W1020 13:23:18.730020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:18.739531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:23:18.826552       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-794175_661e2458-9d9c-466e-9c8f-f14a92ade907!
	W1020 13:23:20.742877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:20.754836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:22.758013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:22.762917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:24.766375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:24.844611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:26.848309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:26.854917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c2da31ffb29883c5a509e1c067e67e14ad811168ec5c9dcca77b3fd063fead17] <==
	I1020 13:22:30.390750       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 13:23:00.393382       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175: exit status 2 (386.574534ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-794175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-794175
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-794175:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4",
	        "Created": "2025-10-20T13:20:37.812533704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489307,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:22:16.562503218Z",
	            "FinishedAt": "2025-10-20T13:22:15.653102552Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/hostname",
	        "HostsPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/hosts",
	        "LogPath": "/var/lib/docker/containers/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4-json.log",
	        "Name": "/default-k8s-diff-port-794175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-794175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-794175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4",
	                "LowerDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/merged",
	                "UpperDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/diff",
	                "WorkDir": "/var/lib/docker/overlay2/febb176e7484dd8939baecdca965d7bad92d70ef2d6e3458244eb69cdf6fb284/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-794175",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-794175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-794175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-794175",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-794175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8bb54347c6a0847a4a3ce425fc4e4d9e13b24de57fe21f90bed8c768959bd344",
	            "SandboxKey": "/var/run/docker/netns/8bb54347c6a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-794175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:8f:a6:6c:c4:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f24d9859313beae9adf6bbf4afaf7590ce357fd35e4cb1d30db0d0f40ab82b66",
	                    "EndpointID": "bfbb49928f0d59d2ed93fe075f836ce4845a609138bc4aa9c60c3e498147f6e5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-794175",
	                        "a83c39bdcf1c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175: exit status 2 (350.015168ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-794175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-794175 logs -n 25: (1.384963707s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-123220 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ delete  │ -p cert-options-123220                                                                                                                                                                                                                        │ cert-options-123220          │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:17 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:17 UTC │ 20 Oct 25 13:18 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-995203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │                     │
	│ stop    │ -p old-k8s-version-995203 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-995203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:19 UTC │
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-995203 image list --format=json                                                                                                                                                                                               │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ pause   │ -p old-k8s-version-995203 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │                     │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:21 UTC │
	│ delete  │ -p cert-expiration-066011                                                                                                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:21 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-794175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ stop    │ -p embed-certs-979197 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ image   │ default-k8s-diff-port-794175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ pause   │ -p default-k8s-diff-port-794175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:22:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:22:52.795809  492109 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:22:52.796160  492109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:22:52.796173  492109 out.go:374] Setting ErrFile to fd 2...
	I1020 13:22:52.796181  492109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:22:52.796612  492109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:22:52.797439  492109 out.go:368] Setting JSON to false
	I1020 13:22:52.798414  492109 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11123,"bootTime":1760955450,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:22:52.798477  492109 start.go:141] virtualization:  
	I1020 13:22:52.801458  492109 out.go:179] * [embed-certs-979197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:22:52.805348  492109 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:22:52.805562  492109 notify.go:220] Checking for updates...
	I1020 13:22:52.811324  492109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:22:52.814240  492109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:22:52.817166  492109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:22:52.820021  492109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:22:52.822935  492109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:22:52.826307  492109 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:22:52.826870  492109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:22:52.854645  492109 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:22:52.854763  492109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:22:52.915579  492109 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:22:52.905724793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:22:52.915691  492109 docker.go:318] overlay module found
	I1020 13:22:52.918748  492109 out.go:179] * Using the docker driver based on existing profile
	I1020 13:22:52.921557  492109 start.go:305] selected driver: docker
	I1020 13:22:52.921576  492109 start.go:925] validating driver "docker" against &{Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:22:52.921689  492109 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:22:52.922430  492109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:22:52.987015  492109 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:22:52.976992844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:22:52.987366  492109 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:22:52.987403  492109 cni.go:84] Creating CNI manager for ""
	I1020 13:22:52.987480  492109 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:22:52.987521  492109 start.go:349] cluster config:
	{Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:22:52.992532  492109 out.go:179] * Starting "embed-certs-979197" primary control-plane node in "embed-certs-979197" cluster
	I1020 13:22:52.995366  492109 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:22:52.998280  492109 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:22:53.001118  492109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:22:53.001147  492109 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:22:53.001171  492109 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:22:53.001181  492109 cache.go:58] Caching tarball of preloaded images
	I1020 13:22:53.001369  492109 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:22:53.001383  492109 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:22:53.001499  492109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/config.json ...
	I1020 13:22:53.022523  492109 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:22:53.022555  492109 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:22:53.022578  492109 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:22:53.022602  492109 start.go:360] acquireMachinesLock for embed-certs-979197: {Name:mk95b0ada4992492fb672a02a9de970f7541a690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:22:53.022668  492109 start.go:364] duration metric: took 40.821µs to acquireMachinesLock for "embed-certs-979197"
	I1020 13:22:53.022692  492109 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:22:53.022700  492109 fix.go:54] fixHost starting: 
	I1020 13:22:53.022977  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:22:53.039824  492109 fix.go:112] recreateIfNeeded on embed-certs-979197: state=Stopped err=<nil>
	W1020 13:22:53.039866  492109 fix.go:138] unexpected machine state, will restart: <nil>
	W1020 13:22:51.263641  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:22:53.264827  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:22:55.764452  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	I1020 13:22:53.043076  492109 out.go:252] * Restarting existing docker container for "embed-certs-979197" ...
	I1020 13:22:53.043158  492109 cli_runner.go:164] Run: docker start embed-certs-979197
	I1020 13:22:53.305742  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:22:53.323292  492109 kic.go:430] container "embed-certs-979197" state is running.
	I1020 13:22:53.323688  492109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:22:53.347663  492109 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/config.json ...
	I1020 13:22:53.347901  492109 machine.go:93] provisionDockerMachine start ...
	I1020 13:22:53.347964  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:53.369328  492109 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:53.369653  492109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1020 13:22:53.369663  492109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:22:53.370399  492109 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60378->127.0.0.1:33443: read: connection reset by peer
	I1020 13:22:56.520106  492109 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-979197
	
	I1020 13:22:56.520136  492109 ubuntu.go:182] provisioning hostname "embed-certs-979197"
	I1020 13:22:56.520210  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:56.538441  492109 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:56.538758  492109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1020 13:22:56.538775  492109 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-979197 && echo "embed-certs-979197" | sudo tee /etc/hostname
	I1020 13:22:56.699556  492109 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-979197
	
	I1020 13:22:56.699651  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:56.720174  492109 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:56.720529  492109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1020 13:22:56.720557  492109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-979197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-979197/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-979197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:22:56.876618  492109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:22:56.876645  492109 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:22:56.876674  492109 ubuntu.go:190] setting up certificates
	I1020 13:22:56.876685  492109 provision.go:84] configureAuth start
	I1020 13:22:56.876746  492109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:22:56.894102  492109 provision.go:143] copyHostCerts
	I1020 13:22:56.894163  492109 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:22:56.894181  492109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:22:56.894254  492109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:22:56.894344  492109 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:22:56.894349  492109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:22:56.894373  492109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:22:56.894420  492109 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:22:56.894425  492109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:22:56.894447  492109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:22:56.894489  492109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.embed-certs-979197 san=[127.0.0.1 192.168.85.2 embed-certs-979197 localhost minikube]
	I1020 13:22:57.560443  492109 provision.go:177] copyRemoteCerts
	I1020 13:22:57.560525  492109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:22:57.560572  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:57.579516  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:57.688274  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:22:57.705353  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:22:57.722570  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:22:57.739538  492109 provision.go:87] duration metric: took 862.824759ms to configureAuth
	I1020 13:22:57.739566  492109 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:22:57.739799  492109 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:22:57.739912  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:57.757031  492109 main.go:141] libmachine: Using SSH client type: native
	I1020 13:22:57.757450  492109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1020 13:22:57.757471  492109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:22:58.091859  492109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:22:58.091883  492109 machine.go:96] duration metric: took 4.743971666s to provisionDockerMachine
	I1020 13:22:58.091894  492109 start.go:293] postStartSetup for "embed-certs-979197" (driver="docker")
	I1020 13:22:58.091904  492109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:22:58.091972  492109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:22:58.092011  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:58.115329  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:58.233331  492109 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:22:58.237129  492109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:22:58.237161  492109 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:22:58.237174  492109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:22:58.237236  492109 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:22:58.237331  492109 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:22:58.237448  492109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:22:58.245353  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:22:58.265393  492109 start.go:296] duration metric: took 173.484234ms for postStartSetup
	I1020 13:22:58.265556  492109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:22:58.265634  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:58.282740  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:58.385584  492109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:22:58.390658  492109 fix.go:56] duration metric: took 5.367951465s for fixHost
	I1020 13:22:58.390684  492109 start.go:83] releasing machines lock for "embed-certs-979197", held for 5.368004636s
	I1020 13:22:58.390784  492109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-979197
	I1020 13:22:58.412123  492109 ssh_runner.go:195] Run: cat /version.json
	I1020 13:22:58.412185  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:58.412485  492109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:22:58.412550  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:22:58.436631  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:58.448619  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:22:58.540755  492109 ssh_runner.go:195] Run: systemctl --version
	I1020 13:22:58.639848  492109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:22:58.676838  492109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:22:58.681979  492109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:22:58.682128  492109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:22:58.690249  492109 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
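
Note the disable-by-rename convention in the find command above: matching bridge/podman CNI configs are moved aside with a .mk_disabled suffix rather than deleted, so a later start can restore them. The same effect in plain shell (an illustrative sketch, not the command the test ran):

	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  [ -e "$f" ] || continue                  # glob may match nothing
	  case "$f" in
	    *.mk_disabled) ;;                      # already disabled
	    *) sudo mv "$f" "$f.mk_disabled" ;;    # disable by rename
	  esac
	done
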
	I1020 13:22:58.690276  492109 start.go:495] detecting cgroup driver to use...
	I1020 13:22:58.690311  492109 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:22:58.690367  492109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:22:58.705625  492109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:22:58.718464  492109 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:22:58.718581  492109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:22:58.734927  492109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:22:58.749027  492109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:22:58.890513  492109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:22:59.025014  492109 docker.go:234] disabling docker service ...
	I1020 13:22:59.025138  492109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:22:59.043100  492109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:22:59.056349  492109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:22:59.179814  492109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:22:59.303401  492109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:22:59.316782  492109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:22:59.331422  492109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:22:59.331501  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.341107  492109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:22:59.341175  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.351661  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.360928  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.369648  492109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:22:59.377697  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.386684  492109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.395185  492109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:22:59.404300  492109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:22:59.411837  492109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:22:59.419565  492109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:22:59.539103  492109 ssh_runner.go:195] Run: sudo systemctl restart crio
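
Taken together, the run of sed edits above (13:22:59.331 through 13:22:59.419) rewrites /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A quick way to confirm the result (illustrative; the expected values below are reconstructed from the commands, not captured from the node):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [ ... ])
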
	I1020 13:22:59.672708  492109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:22:59.672853  492109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:22:59.676949  492109 start.go:563] Will wait 60s for crictl version
	I1020 13:22:59.677061  492109 ssh_runner.go:195] Run: which crictl
	I1020 13:22:59.680639  492109 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:22:59.710767  492109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:22:59.710926  492109 ssh_runner.go:195] Run: crio --version
	I1020 13:22:59.740516  492109 ssh_runner.go:195] Run: crio --version
	I1020 13:22:59.775382  492109 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1020 13:22:57.764712  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:23:00.285060  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	I1020 13:22:59.778149  492109 cli_runner.go:164] Run: docker network inspect embed-certs-979197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:22:59.794205  492109 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 13:22:59.798186  492109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
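
The /etc/hosts update above is idempotent: grep -v strips any stale host.minikube.internal line before the fresh entry is appended, and the file is replaced in one sudo cp. The same pattern reappears below for control-plane.minikube.internal. Generalized as a sketch (update_hosts_entry is a hypothetical helper, not part of minikube):

	update_hosts_entry() {   # usage: update_hosts_entry IP NAME
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
	update_hosts_entry 192.168.85.1 host.minikube.internal
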
	I1020 13:22:59.807916  492109 kubeadm.go:883] updating cluster {Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:22:59.808037  492109 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:22:59.808094  492109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:22:59.844594  492109 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:22:59.844621  492109 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:22:59.844681  492109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:22:59.872655  492109 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:22:59.872685  492109 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:22:59.872694  492109 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 13:22:59.872813  492109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-979197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
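
The [Unit]/[Service] fragment above is the kubelet drop-in that minikube renders; per the scp at 13:22:59.966 below, it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Activating such a drop-in follows the usual systemd sequence (a sketch; the log shows the same two steps at 13:23:00.107 and 13:23:00.379):

	sudo systemctl daemon-reload    # pick up the new drop-in
	sudo systemctl start kubelet    # run with the overridden ExecStart
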
	I1020 13:22:59.872938  492109 ssh_runner.go:195] Run: crio config
	I1020 13:22:59.950431  492109 cni.go:84] Creating CNI manager for ""
	I1020 13:22:59.950453  492109 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:22:59.950468  492109 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:22:59.950490  492109 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-979197 NodeName:embed-certs-979197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:22:59.950613  492109 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-979197"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
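
The three YAML documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered in memory and, per the scp below, written to /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file by hand (a sketch; assumes kubeadm's "config validate" subcommand, which this test itself does not run):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new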
	
	I1020 13:22:59.950687  492109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:22:59.958770  492109 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:22:59.958850  492109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:22:59.966287  492109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1020 13:22:59.980789  492109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:22:59.995310  492109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1020 13:23:00.016404  492109 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:23:00.046119  492109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:23:00.107979  492109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:23:00.379383  492109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:23:00.407287  492109 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197 for IP: 192.168.85.2
	I1020 13:23:00.407320  492109 certs.go:195] generating shared ca certs ...
	I1020 13:23:00.407358  492109 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:00.407598  492109 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:23:00.407682  492109 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:23:00.407694  492109 certs.go:257] generating profile certs ...
	I1020 13:23:00.407802  492109 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/client.key
	I1020 13:23:00.407885  492109 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key.78ce9c55
	I1020 13:23:00.407947  492109 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.key
	I1020 13:23:00.408101  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:23:00.408152  492109 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:23:00.408166  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:23:00.408191  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:23:00.408226  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:23:00.408255  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:23:00.408314  492109 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:23:00.409354  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:23:00.450673  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:23:00.481740  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:23:00.507930  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:23:00.534723  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1020 13:23:00.560173  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:23:00.584722  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:23:00.612678  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/embed-certs-979197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:23:00.638040  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:23:00.657046  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:23:00.677116  492109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:23:00.697410  492109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:23:00.710610  492109 ssh_runner.go:195] Run: openssl version
	I1020 13:23:00.717342  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:23:00.726107  492109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:23:00.730300  492109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:23:00.730378  492109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:23:00.775895  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:23:00.783842  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:23:00.791891  492109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:23:00.795557  492109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:23:00.795652  492109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:23:00.841530  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:23:00.849703  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:23:00.858214  492109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:00.862005  492109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:00.862070  492109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:00.902686  492109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
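
The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: each file name is the certificate's subject hash plus a .0 suffix, which is exactly what the preceding openssl x509 -hash calls compute. For one cert (illustrative):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here
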
	I1020 13:23:00.911017  492109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:23:00.914880  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:23:00.958641  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:23:01.000289  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:23:01.048984  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:23:01.131269  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:23:01.269671  492109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1020 13:23:01.371041  492109 kubeadm.go:400] StartCluster: {Name:embed-certs-979197 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-979197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:23:01.371199  492109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:23:01.371315  492109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:23:01.446862  492109 cri.go:89] found id: "e584e506b7520e3f3fc6c5efbd25f505db7a034d9a0b978b8af3a90afb94f84b"
	I1020 13:23:01.446940  492109 cri.go:89] found id: "fdf35c27cf71e3c6a3b8814a9f32bced0ae742f30f72aff6760a85b4a3a7145b"
	I1020 13:23:01.446960  492109 cri.go:89] found id: "aa8e7b9b68af423d774d170b1c024dba6f7323fa1d41441cd1e8ee87d1cd0140"
	I1020 13:23:01.446989  492109 cri.go:89] found id: "631a35129ac4de8ec7ce893c70fd5f816fb79609c9e434d0fb0f0fad3f58552b"
	I1020 13:23:01.447028  492109 cri.go:89] found id: ""
	I1020 13:23:01.447109  492109 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:23:01.474150  492109 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:23:01Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:23:01.474288  492109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:23:01.485789  492109 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:23:01.485859  492109 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:23:01.485944  492109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:23:01.495478  492109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:23:01.496189  492109 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-979197" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:23:01.496540  492109 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-979197" cluster setting kubeconfig missing "embed-certs-979197" context setting]
	I1020 13:23:01.497052  492109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:01.498892  492109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:23:01.511087  492109 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 13:23:01.511170  492109 kubeadm.go:601] duration metric: took 25.291285ms to restartPrimaryControlPlane
	I1020 13:23:01.511194  492109 kubeadm.go:402] duration metric: took 140.163076ms to StartCluster
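
restartPrimaryControlPlane decides whether a re-init is needed by diffing the previously rendered kubeadm config against the fresh one (the sudo diff -u at 13:23:01.498); an empty diff is read as no reconfiguration required. The check in isolation (a sketch, assuming the same file paths):

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
	  echo "kubeadm config unchanged; skipping control-plane re-init"
	fi
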
	I1020 13:23:01.511240  492109 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:01.511335  492109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:23:01.512773  492109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:01.513195  492109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:23:01.513609  492109 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:23:01.513700  492109 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:23:01.513885  492109 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-979197"
	I1020 13:23:01.513914  492109 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-979197"
	W1020 13:23:01.513925  492109 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:23:01.513965  492109 host.go:66] Checking if "embed-certs-979197" exists ...
	I1020 13:23:01.514608  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:23:01.514834  492109 addons.go:69] Setting dashboard=true in profile "embed-certs-979197"
	I1020 13:23:01.514858  492109 addons.go:238] Setting addon dashboard=true in "embed-certs-979197"
	W1020 13:23:01.514867  492109 addons.go:247] addon dashboard should already be in state true
	I1020 13:23:01.514897  492109 host.go:66] Checking if "embed-certs-979197" exists ...
	I1020 13:23:01.515197  492109 addons.go:69] Setting default-storageclass=true in profile "embed-certs-979197"
	I1020 13:23:01.515239  492109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-979197"
	I1020 13:23:01.515399  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:23:01.515633  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:23:01.518449  492109 out.go:179] * Verifying Kubernetes components...
	I1020 13:23:01.524674  492109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:23:01.582908  492109 addons.go:238] Setting addon default-storageclass=true in "embed-certs-979197"
	W1020 13:23:01.582931  492109 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:23:01.582955  492109 host.go:66] Checking if "embed-certs-979197" exists ...
	I1020 13:23:01.583400  492109 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:23:01.585627  492109 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:23:01.590131  492109 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 13:23:01.590148  492109 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:01.594062  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:23:01.594096  492109 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:23:01.594110  492109 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:23:01.594125  492109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:23:01.594184  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:23:01.594190  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:23:01.635148  492109 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:23:01.635172  492109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:23:01.635238  492109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:23:01.656687  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:23:01.672321  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:23:01.691005  492109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:23:01.947356  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:23:01.947439  492109 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:23:01.978676  492109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:23:02.007012  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:23:02.007115  492109 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:23:02.030178  492109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:23:02.046284  492109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:23:02.090261  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:23:02.090341  492109 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:23:02.217919  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:23:02.217992  492109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:23:02.344932  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:23:02.344954  492109 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:23:02.384862  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:23:02.384884  492109 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:23:02.402921  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:23:02.402942  492109 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:23:02.420534  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:23:02.420556  492109 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:23:02.440200  492109 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:23:02.440263  492109 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:23:02.469880  492109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1020 13:23:02.764113  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:23:04.764210  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	I1020 13:23:08.351669  492109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.37291266s)
	I1020 13:23:08.351732  492109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.32148221s)
	I1020 13:23:08.352101  492109 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.305747883s)
	I1020 13:23:08.352136  492109 node_ready.go:35] waiting up to 6m0s for node "embed-certs-979197" to be "Ready" ...
	I1020 13:23:08.378480  492109 node_ready.go:49] node "embed-certs-979197" is "Ready"
	I1020 13:23:08.378511  492109 node_ready.go:38] duration metric: took 26.358967ms for node "embed-certs-979197" to be "Ready" ...
	I1020 13:23:08.378534  492109 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:23:08.378646  492109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:23:08.397028  492109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.92705259s)
	I1020 13:23:08.400269  492109 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-979197 addons enable metrics-server
	
	I1020 13:23:08.403286  492109 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1020 13:23:07.264445  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	W1020 13:23:09.763065  489182 pod_ready.go:104] pod "coredns-66bc5c9577-fgxwg" is not "Ready", error: <nil>
	I1020 13:23:10.264254  489182 pod_ready.go:94] pod "coredns-66bc5c9577-fgxwg" is "Ready"
	I1020 13:23:10.264286  489182 pod_ready.go:86] duration metric: took 39.006208579s for pod "coredns-66bc5c9577-fgxwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.267493  489182 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.272409  489182 pod_ready.go:94] pod "etcd-default-k8s-diff-port-794175" is "Ready"
	I1020 13:23:10.272435  489182 pod_ready.go:86] duration metric: took 4.890247ms for pod "etcd-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.275128  489182 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.284937  489182 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-794175" is "Ready"
	I1020 13:23:10.284966  489182 pod_ready.go:86] duration metric: took 9.807144ms for pod "kube-apiserver-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.368616  489182 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.461754  489182 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-794175" is "Ready"
	I1020 13:23:10.461782  489182 pod_ready.go:86] duration metric: took 93.135827ms for pod "kube-controller-manager-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:10.662274  489182 pod_ready.go:83] waiting for pod "kube-proxy-jkb75" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:11.062569  489182 pod_ready.go:94] pod "kube-proxy-jkb75" is "Ready"
	I1020 13:23:11.062640  489182 pod_ready.go:86] duration metric: took 400.299479ms for pod "kube-proxy-jkb75" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:11.261934  489182 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:11.662200  489182 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-794175" is "Ready"
	I1020 13:23:11.662283  489182 pod_ready.go:86] duration metric: took 400.323529ms for pod "kube-scheduler-default-k8s-diff-port-794175" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:11.662313  489182 pod_ready.go:40] duration metric: took 40.471401352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:23:11.776206  489182 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:23:11.779742  489182 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-794175" cluster and "default" namespace by default
	I1020 13:23:08.406103  492109 addons.go:514] duration metric: took 6.892411671s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1020 13:23:08.422339  492109 api_server.go:72] duration metric: took 6.909070769s to wait for apiserver process to appear ...
	I1020 13:23:08.422374  492109 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:23:08.422395  492109 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:23:08.435461  492109 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 13:23:08.436744  492109 api_server.go:141] control plane version: v1.34.1
	I1020 13:23:08.436784  492109 api_server.go:131] duration metric: took 14.401642ms to wait for apiserver health ...
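
The healthz wait is a plain GET that passes only on an HTTP 200 whose body is "ok". Roughly the same probe by hand, with -k standing in for the client's certificate handling (endpoint taken from the log above):

	# Poll the endpoint minikube checks above; expect "ok".
	curl -sk https://192.168.85.2:8443/healthz
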
	I1020 13:23:08.436794  492109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:23:08.441618  492109 system_pods.go:59] 8 kube-system pods found
	I1020 13:23:08.441667  492109 system_pods.go:61] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:23:08.441676  492109 system_pods.go:61] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:23:08.441691  492109 system_pods.go:61] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:23:08.441723  492109 system_pods.go:61] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:23:08.441736  492109 system_pods.go:61] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:23:08.441742  492109 system_pods.go:61] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:23:08.441749  492109 system_pods.go:61] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:23:08.441765  492109 system_pods.go:61] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Running
	I1020 13:23:08.441771  492109 system_pods.go:74] duration metric: took 4.971987ms to wait for pod list to return data ...
	I1020 13:23:08.441783  492109 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:23:08.445203  492109 default_sa.go:45] found service account: "default"
	I1020 13:23:08.445239  492109 default_sa.go:55] duration metric: took 3.449137ms for default service account to be created ...
	I1020 13:23:08.445249  492109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:23:08.448648  492109 system_pods.go:86] 8 kube-system pods found
	I1020 13:23:08.448690  492109 system_pods.go:89] "coredns-66bc5c9577-9hxmm" [b9d863c1-b71a-470d-90fd-47fa59ace32e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:23:08.448700  492109 system_pods.go:89] "etcd-embed-certs-979197" [a6f1c158-6bb5-4a9d-a7a7-5d81b68eb607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:23:08.448707  492109 system_pods.go:89] "kindnet-jzxdn" [84729d76-950b-4e09-a264-1b61ffedaac7] Running
	I1020 13:23:08.448715  492109 system_pods.go:89] "kube-apiserver-embed-certs-979197" [d44cd3ed-5d34-4e63-a343-02f8ee61e1ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:23:08.448731  492109 system_pods.go:89] "kube-controller-manager-embed-certs-979197" [5728b049-5b3a-4a1d-af9c-25503367f080] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:23:08.448741  492109 system_pods.go:89] "kube-proxy-gf2bz" [d204f6c2-319e-4a08-96ad-a9e789c40df8] Running
	I1020 13:23:08.448758  492109 system_pods.go:89] "kube-scheduler-embed-certs-979197" [c36dd9aa-1984-40af-89be-67c4d66c0da6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:23:08.448768  492109 system_pods.go:89] "storage-provisioner" [8b66a916-769c-48f7-a28b-948022299e8e] Running
	I1020 13:23:08.448776  492109 system_pods.go:126] duration metric: took 3.520851ms to wait for k8s-apps to be running ...
	I1020 13:23:08.448787  492109 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:23:08.448852  492109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:23:08.470222  492109 system_svc.go:56] duration metric: took 21.425889ms WaitForService to wait for kubelet
	I1020 13:23:08.470251  492109 kubeadm.go:586] duration metric: took 6.956986721s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:23:08.470282  492109 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:23:08.473481  492109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:23:08.473553  492109 node_conditions.go:123] node cpu capacity is 2
	I1020 13:23:08.473581  492109 node_conditions.go:105] duration metric: took 3.29176ms to run NodePressure ...
	I1020 13:23:08.473606  492109 start.go:241] waiting for startup goroutines ...
	I1020 13:23:08.473643  492109 start.go:246] waiting for cluster config update ...
	I1020 13:23:08.473676  492109 start.go:255] writing updated cluster config ...
	I1020 13:23:08.474013  492109 ssh_runner.go:195] Run: rm -f paused
	I1020 13:23:08.477761  492109 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:23:08.481881  492109 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9hxmm" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 13:23:10.496746  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:12.988161  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:14.988428  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:17.489316  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:19.987651  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:21.990430  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:24.488404  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:26.989311  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.166903946Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c06f233c-0e2b-423d-be8f-7ce72b787b6a name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.168135987Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a1d7f430-babe-41ea-8a40-76dae6528f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.168254831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.178818794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.192611417Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f1e6d6089ddd038c2deb2f59237d78c4eb261b2e90468820b55ac95aa15d3cce/merged/etc/passwd: no such file or directory"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.192694019Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f1e6d6089ddd038c2deb2f59237d78c4eb261b2e90468820b55ac95aa15d3cce/merged/etc/group: no such file or directory"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.193222658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.23457849Z" level=info msg="Created container 863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d: kube-system/storage-provisioner/storage-provisioner" id=a1d7f430-babe-41ea-8a40-76dae6528f2a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.235643661Z" level=info msg="Starting container: 863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d" id=61491660-1a50-4828-a3b0-b8a0477ed3af name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:23:01 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:01.238737191Z" level=info msg="Started container" PID=1634 containerID=863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d description=kube-system/storage-provisioner/storage-provisioner id=61491660-1a50-4828-a3b0-b8a0477ed3af name=/runtime.v1.RuntimeService/StartContainer sandboxID=83ffdf4fee19656a187926f87ff33c9a3797027fc5e31c6a1eb073791c3ccc44
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.730624524Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.734804427Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.734958866Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.735036504Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.740525454Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.740713182Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.74079347Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.74445202Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.744486827Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.744513346Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.749912055Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.749948125Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.74996895Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.756864589Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:10 default-k8s-diff-port-794175 crio[646]: time="2025-10-20T13:23:10.756899757Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	863af67c2dcab       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   83ffdf4fee196       storage-provisioner                                    kube-system
	025be26ce1b35       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   5a4e3c60352bd       dashboard-metrics-scraper-6ffb444bf9-nzzsl             kubernetes-dashboard
	0d4585e869c64       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   9e5a0acd70164       kubernetes-dashboard-855c9754f9-spstf                  kubernetes-dashboard
	295969f69655a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   8f717c96ad568       coredns-66bc5c9577-fgxwg                               kube-system
	24a62685df217       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   2b5872d902bbc       busybox                                                default
	6c168c94f962c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   633da3c6f3067       kube-proxy-jkb75                                       kube-system
	c5b36c984daad       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   bea2eda7425c2       kindnet-9w4q8                                          kube-system
	c2da31ffb2988       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   83ffdf4fee196       storage-provisioner                                    kube-system
	a1f57d1b86d10       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   d4408c52ccceb       kube-controller-manager-default-k8s-diff-port-794175   kube-system
	9d5c53a7bdae3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   70b746c834d7c       kube-apiserver-default-k8s-diff-port-794175            kube-system
	096f1cd30b37c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   e43dba1df4b7b       kube-scheduler-default-k8s-diff-port-794175            kube-system
	56b7c71f81efc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2756e421af61c       etcd-default-k8s-diff-port-794175                      kube-system
	
	
	==> coredns [295969f69655a7f6680e1af4de2531d515ee76538364168b72643bc1c0e2555c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54007 - 43100 "HINFO IN 7076610121752415219.6253543287017802264. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01564955s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
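
These reflector errors mean coredns could not reach the in-cluster apiserver VIP (10.96.0.1:443) for a stretch after the restart; the kindnet log further down records the same i/o timeout before its caches finally sync. A hedged connectivity check from the busybox pod already running in this profile (assuming its busybox build ships nc with -z support):

	# Reproduce or rule out the VIP timeout from inside the cluster.
	kubectl exec busybox -- nc -zv -w 5 10.96.0.1 443
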
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-794175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-794175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=default-k8s-diff-port-794175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_21_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:20:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-794175
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:23:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:23:00 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:23:00 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:23:00 +0000   Mon, 20 Oct 2025 13:20:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:23:00 +0000   Mon, 20 Oct 2025 13:21:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-794175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e9dbb7f7-719c-4a64-84f6-74d2f47cffc5
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-fgxwg                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-794175                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m28s
	  kube-system                 kindnet-9w4q8                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-794175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-794175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-jkb75                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-794175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nzzsl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-spstf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m21s              kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m28s              kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m28s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s              kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s              kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m28s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m24s              node-controller  Node default-k8s-diff-port-794175 event: Registered Node default-k8s-diff-port-794175 in Controller
	  Normal   NodeReady                102s               kubelet          Node default-k8s-diff-port-794175 status is now: NodeReady
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 67s)  kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 67s)  kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 67s)  kubelet          Node default-k8s-diff-port-794175 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-794175 event: Registered Node default-k8s-diff-port-794175 in Controller
	
	
	==> dmesg <==
	[Oct20 12:59] overlayfs: idmapped layers are currently not supported
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [56b7c71f81efc16edacd521e6aae411626e76d228a65e9add6a6a338fa9c8438] <==
	{"level":"warn","ts":"2025-10-20T13:22:27.991696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.008732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.033158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.056771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.067371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.084723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.098314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.115963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.132139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.152083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.166742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.182811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.204626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.219971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.235909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.252502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.278223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.303994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.319713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.334443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.356519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.384229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.398807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.414165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:22:28.465802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45560","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:23:30 up  3:06,  0 user,  load average: 2.56, 2.64, 2.47
	Linux default-k8s-diff-port-794175 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c5b36c984daad909967c8cc642e55e8e4193c1aa95b8708ae59d7ddad6f2d075] <==
	I1020 13:22:30.531487       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:22:30.603046       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:22:30.603206       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:22:30.603219       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:22:30.603230       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:22:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:22:30.729955       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:22:30.729972       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:22:30.729980       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:22:30.730101       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:23:00.733872       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:23:00.733987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:23:00.734008       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 13:23:00.734069       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1020 13:23:02.330247       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:23:02.330403       1 metrics.go:72] Registering metrics
	I1020 13:23:02.330525       1 controller.go:711] "Syncing nftables rules"
	I1020 13:23:10.730198       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:23:10.730365       1 main.go:301] handling current node
	I1020 13:23:20.736438       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:23:20.736475       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d5c53a7bdae3f025044a87f8c5d2e1b320b8ceedb2b698caa614049aa2ebc06] <==
	I1020 13:22:29.527254       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 13:22:29.527285       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 13:22:29.527567       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 13:22:29.527745       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 13:22:29.527793       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:22:29.548022       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 13:22:29.552124       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 13:22:29.552492       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 13:22:29.556681       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:22:29.561251       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1020 13:22:29.561701       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 13:22:29.569393       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:22:29.570967       1 cache.go:39] Caches are synced for autoregister controller
	E1020 13:22:29.614855       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:22:30.062274       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:22:30.161058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:22:30.699837       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:22:30.768745       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:22:30.819945       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:22:30.837440       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:22:30.951404       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.224.15"}
	I1020 13:22:30.980326       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.64.37"}
	I1020 13:22:32.814326       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:22:33.163989       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:22:33.362856       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a1f57d1b86d10e16e97306a3d10e424a14e07532b8216a6771718f9c926ae56d] <==
	I1020 13:22:32.813866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:22:32.813942       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 13:22:32.816477       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:22:32.816635       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1020 13:22:32.820263       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 13:22:32.822650       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1020 13:22:32.824454       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:22:32.826927       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 13:22:32.828113       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 13:22:32.829235       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 13:22:32.830359       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:22:32.832587       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 13:22:32.835213       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:22:32.844502       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:22:32.856236       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:22:32.856249       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 13:22:32.856283       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:22:32.856512       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:22:32.856437       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 13:22:32.856614       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:22:32.856754       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-794175"
	I1020 13:22:32.856834       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 13:22:32.857398       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:22:32.858946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:22:32.866174       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [6c168c94f962c234b865074198edbc08aad0b0769ae1e96069c37ca5c002d8fe] <==
	I1020 13:22:30.797047       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:22:30.910540       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:22:31.015475       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:22:31.015530       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:22:31.015606       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:22:31.132320       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:22:31.132468       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:22:31.148677       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:22:31.149133       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:22:31.149389       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:22:31.150822       1 config.go:200] "Starting service config controller"
	I1020 13:22:31.150946       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:22:31.151004       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:22:31.151033       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:22:31.151049       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:22:31.151053       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:22:31.154557       1 config.go:309] "Starting node config controller"
	I1020 13:22:31.162848       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:22:31.162934       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:22:31.251286       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:22:31.251287       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:22:31.251313       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
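	
	kube-proxy flags its own remedy here: with nodePortAddresses unset, NodePort connections are accepted on every local IP. For reference, a sketch of the equivalent KubeProxyConfiguration stanza (the `primary` keyword is the one the warning itself suggests):
	
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# Accept NodePort connections only on each node's primary IP(s),
	# instead of on all local IPs (the default the warning flags).
	nodePortAddresses:
	  - primary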
	
	
	==> kube-scheduler [096f1cd30b37ce6efa7756c97e11d57278a6e55b13f1e328c2db6254d6777462] <==
	I1020 13:22:27.516482       1 serving.go:386] Generated self-signed cert in-memory
	W1020 13:22:29.398928       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 13:22:29.398955       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 13:22:29.398965       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 13:22:29.398972       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 13:22:29.566990       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:22:29.577839       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:22:29.580756       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:22:29.583844       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:22:29.583886       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:22:29.583905       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:22:29.684665       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:22:33 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:33.588272     773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww5qm\" (UniqueName: \"kubernetes.io/projected/a094196e-f7e4-45b1-9a0a-72749b039ea4-kube-api-access-ww5qm\") pod \"dashboard-metrics-scraper-6ffb444bf9-nzzsl\" (UID: \"a094196e-f7e4-45b1-9a0a-72749b039ea4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl"
	Oct 20 13:22:33 default-k8s-diff-port-794175 kubelet[773]: W1020 13:22:33.781408     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/crio-5a4e3c60352bd8f827498b7782b144b02c667cad2c9a77b20fbf72df8cd4fc36 WatchSource:0}: Error finding container 5a4e3c60352bd8f827498b7782b144b02c667cad2c9a77b20fbf72df8cd4fc36: Status 404 returned error can't find the container with id 5a4e3c60352bd8f827498b7782b144b02c667cad2c9a77b20fbf72df8cd4fc36
	Oct 20 13:22:33 default-k8s-diff-port-794175 kubelet[773]: W1020 13:22:33.799137     773 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a83c39bdcf1cd13acc5a615f226245cd7d42451da730826cc6e50caeb79fd9e4/crio-9e5a0acd701649be3f66c3ac221e182e3376b662c61f3f8deb8307fceb84f80b WatchSource:0}: Error finding container 9e5a0acd701649be3f66c3ac221e182e3376b662c61f3f8deb8307fceb84f80b: Status 404 returned error can't find the container with id 9e5a0acd701649be3f66c3ac221e182e3376b662c61f3f8deb8307fceb84f80b
	Oct 20 13:22:39 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:39.076449     773 scope.go:117] "RemoveContainer" containerID="dc884a6f38fedde045726b07bfa831f8071310f5ca97a59a5f2a23c3c35a9d4c"
	Oct 20 13:22:40 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:40.051594     773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 13:22:40 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:40.088605     773 scope.go:117] "RemoveContainer" containerID="dc884a6f38fedde045726b07bfa831f8071310f5ca97a59a5f2a23c3c35a9d4c"
	Oct 20 13:22:40 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:40.089606     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:40 default-k8s-diff-port-794175 kubelet[773]: E1020 13:22:40.090131     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:22:41 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:41.092593     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:41 default-k8s-diff-port-794175 kubelet[773]: E1020 13:22:41.092758     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:22:44 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:44.764260     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:44 default-k8s-diff-port-794175 kubelet[773]: E1020 13:22:44.764514     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:22:45 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:45.147262     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-spstf" podStartSLOduration=1.603767429 podStartE2EDuration="12.147239854s" podCreationTimestamp="2025-10-20 13:22:33 +0000 UTC" firstStartedPulling="2025-10-20 13:22:33.802737972 +0000 UTC m=+10.111748251" lastFinishedPulling="2025-10-20 13:22:44.346210405 +0000 UTC m=+20.655220676" observedRunningTime="2025-10-20 13:22:45.146575895 +0000 UTC m=+21.455586174" watchObservedRunningTime="2025-10-20 13:22:45.147239854 +0000 UTC m=+21.456250133"
	Oct 20 13:22:56 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:56.899027     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:57 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:57.135301     773 scope.go:117] "RemoveContainer" containerID="3b6fedc0ab8a1a36352805f441b4ce331c5debb61a710836efe505d3b6f2b399"
	Oct 20 13:22:57 default-k8s-diff-port-794175 kubelet[773]: I1020 13:22:57.135698     773 scope.go:117] "RemoveContainer" containerID="025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	Oct 20 13:22:57 default-k8s-diff-port-794175 kubelet[773]: E1020 13:22:57.136042     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:23:01 default-k8s-diff-port-794175 kubelet[773]: I1020 13:23:01.164273     773 scope.go:117] "RemoveContainer" containerID="c2da31ffb29883c5a509e1c067e67e14ad811168ec5c9dcca77b3fd063fead17"
	Oct 20 13:23:04 default-k8s-diff-port-794175 kubelet[773]: I1020 13:23:04.764632     773 scope.go:117] "RemoveContainer" containerID="025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	Oct 20 13:23:04 default-k8s-diff-port-794175 kubelet[773]: E1020 13:23:04.765268     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:23:16 default-k8s-diff-port-794175 kubelet[773]: I1020 13:23:16.899383     773 scope.go:117] "RemoveContainer" containerID="025be26ce1b35a56173c367799986e46708e6d70a24e4248ac2f5cd17acd90f9"
	Oct 20 13:23:16 default-k8s-diff-port-794175 kubelet[773]: E1020 13:23:16.899571     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nzzsl_kubernetes-dashboard(a094196e-f7e4-45b1-9a0a-72749b039ea4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nzzsl" podUID="a094196e-f7e4-45b1-9a0a-72749b039ea4"
	Oct 20 13:23:25 default-k8s-diff-port-794175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:23:25 default-k8s-diff-port-794175 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:23:25 default-k8s-diff-port-794175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
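	
	The alternating RemoveContainer / CrashLoopBackOff lines show the kubelet's restart back-off for dashboard-metrics-scraper doubling from 10s to 20s between attempts; by default it keeps doubling up to a 5-minute cap. A small sketch of that schedule:
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func main() {
		// Kubelet-style crash-loop back-off as seen above: start at 10s,
		// double per restart, cap at 5m (the kubelet's usual default).
		delay, maxDelay := 10*time.Second, 5*time.Minute
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, delay)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}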
	
	
	==> kubernetes-dashboard [0d4585e869c6427f8e929cf0f1676242d1b0d51446bc8b9735b6c931aee3d98d] <==
	2025/10/20 13:22:44 Starting overwatch
	2025/10/20 13:22:44 Using namespace: kubernetes-dashboard
	2025/10/20 13:22:44 Using in-cluster config to connect to apiserver
	2025/10/20 13:22:44 Using secret token for csrf signing
	2025/10/20 13:22:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 13:22:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 13:22:44 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 13:22:44 Generating JWE encryption key
	2025/10/20 13:22:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 13:22:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 13:22:44 Initializing JWE encryption key from synchronized object
	2025/10/20 13:22:44 Creating in-cluster Sidecar client
	2025/10/20 13:22:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:22:44 Serving insecurely on HTTP port: 9090
	2025/10/20 13:23:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [863af67c2dcab85f3b11efae3d0633c6ea8c8415b1925de7ee8518819e2e6a0d] <==
	I1020 13:23:01.325996       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:23:01.326042       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 13:23:01.328537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:04.785581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:09.046052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:12.644929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:15.698200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:18.720261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:18.725332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:23:18.725497       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:23:18.725659       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-794175_661e2458-9d9c-466e-9c8f-f14a92ade907!
	I1020 13:23:18.726570       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"64224ef5-8dba-4cbf-9a3f-49d2b765cfef", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-794175_661e2458-9d9c-466e-9c8f-f14a92ade907 became leader
	W1020 13:23:18.730020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:18.739531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:23:18.826552       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-794175_661e2458-9d9c-466e-9c8f-f14a92ade907!
	W1020 13:23:20.742877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:20.754836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:22.758013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:22.762917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:24.766375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:24.844611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:26.848309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:26.854917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:28.858845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:28.863723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
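	
	Each poll of the provisioner's v1 Endpoints lock draws one of the deprecation warnings above; current client-go leader election uses a coordination.k8s.io Lease instead. A minimal sketch of a Lease-based lock for the same lease name, assuming in-cluster config, with timings set to common defaults:
	
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // lock-holder identity; hostname is enough for a sketch
	
		lock := &resourcelock.LeaseLock{
			// Lease name taken from the log above; namespace as in the log.
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, starting controller") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}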
	
	
	==> storage-provisioner [c2da31ffb29883c5a509e1c067e67e14ad811168ec5c9dcca77b3fd063fead17] <==
	I1020 13:22:30.390750       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 13:23:00.393382       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
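One detail worth pulling out of the dump above: the first storage-provisioner instance started at 13:22:30 and died exactly 30 seconds later on an i/o timeout dialing 10.96.0.1:443, the in-cluster apiserver VIP, most plausibly because service routing had not yet been programmed on the freshly restarted node; its replacement then acquired the lease normally. A trivial probe for that same path, with the address taken from the log and everything else illustrative:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the kubernetes.default service VIP the failing
		// provisioner tried to reach; whether it answers from a pod depends
		// on kube-proxy having programmed its rules on this node.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable")
	}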
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175: exit status 2 (371.470672ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-794175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.37s)
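The post-mortem pod query above (`--field-selector=status.phase!=Running` across all namespaces) has a direct client-go equivalent; a sketch, assuming in-cluster config:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same filter the helper passes to kubectl: every pod, in every
		// namespace, whose phase is not Running.
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}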

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-979197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-979197 --alsologtostderr -v=1: exit status 80 (2.255171355s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-979197 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 13:24:02.444765  498346 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:24:02.444952  498346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:24:02.444978  498346 out.go:374] Setting ErrFile to fd 2...
	I1020 13:24:02.444996  498346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:24:02.445269  498346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:24:02.445552  498346 out.go:368] Setting JSON to false
	I1020 13:24:02.445597  498346 mustload.go:65] Loading cluster: embed-certs-979197
	I1020 13:24:02.446008  498346 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:24:02.446491  498346 cli_runner.go:164] Run: docker container inspect embed-certs-979197 --format={{.State.Status}}
	I1020 13:24:02.469952  498346 host.go:66] Checking if "embed-certs-979197" exists ...
	I1020 13:24:02.470331  498346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:24:02.560486  498346 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:78 SystemTime:2025-10-20 13:24:02.550194162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:24:02.561133  498346 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-979197 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 13:24:02.564788  498346 out.go:179] * Pausing node embed-certs-979197 ... 
	I1020 13:24:02.567656  498346 host.go:66] Checking if "embed-certs-979197" exists ...
	I1020 13:24:02.568000  498346 ssh_runner.go:195] Run: systemctl --version
	I1020 13:24:02.568052  498346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-979197
	I1020 13:24:02.606899  498346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/embed-certs-979197/id_rsa Username:docker}
	I1020 13:24:02.710921  498346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:24:02.725058  498346 pause.go:52] kubelet running: true
	I1020 13:24:02.725130  498346 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:24:03.011196  498346 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:24:03.011294  498346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:24:03.109635  498346 cri.go:89] found id: "a04ddd0eaf35fc812a2c522888d81d485b2f3a10f3d187d544c6d233a6aec6e0"
	I1020 13:24:03.109675  498346 cri.go:89] found id: "a05f9f7328a594e26d4af40388b3fa293e0634a69f2ae56945b323b31b65e515"
	I1020 13:24:03.109681  498346 cri.go:89] found id: "e9a76dd7d82fef4db24bc979f481c627369eff8a5527c155e7498370b3f8a2c7"
	I1020 13:24:03.109685  498346 cri.go:89] found id: "ec4f1b2741d1bcf1038b63753cb9726f96d12224de84119b14ed7d03a8e887da"
	I1020 13:24:03.109689  498346 cri.go:89] found id: "b06d2e7205597130313084e1717d17e5b507cae70710ab71067333cf26a81bff"
	I1020 13:24:03.109693  498346 cri.go:89] found id: "e584e506b7520e3f3fc6c5efbd25f505db7a034d9a0b978b8af3a90afb94f84b"
	I1020 13:24:03.109714  498346 cri.go:89] found id: "fdf35c27cf71e3c6a3b8814a9f32bced0ae742f30f72aff6760a85b4a3a7145b"
	I1020 13:24:03.109731  498346 cri.go:89] found id: "aa8e7b9b68af423d774d170b1c024dba6f7323fa1d41441cd1e8ee87d1cd0140"
	I1020 13:24:03.109736  498346 cri.go:89] found id: "631a35129ac4de8ec7ce893c70fd5f816fb79609c9e434d0fb0f0fad3f58552b"
	I1020 13:24:03.109744  498346 cri.go:89] found id: "626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190"
	I1020 13:24:03.109753  498346 cri.go:89] found id: "9f363f84681d1a5440f5360011859037100e536b500edc9635f8c9c0b5efa08f"
	I1020 13:24:03.109756  498346 cri.go:89] found id: ""
	I1020 13:24:03.109824  498346 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:24:03.125277  498346 retry.go:31] will retry after 214.249794ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:24:03Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:24:03.340735  498346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:24:03.355523  498346 pause.go:52] kubelet running: false
	I1020 13:24:03.355588  498346 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:24:03.582983  498346 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:24:03.583072  498346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:24:03.714211  498346 cri.go:89] found id: "a04ddd0eaf35fc812a2c522888d81d485b2f3a10f3d187d544c6d233a6aec6e0"
	I1020 13:24:03.714238  498346 cri.go:89] found id: "a05f9f7328a594e26d4af40388b3fa293e0634a69f2ae56945b323b31b65e515"
	I1020 13:24:03.714244  498346 cri.go:89] found id: "e9a76dd7d82fef4db24bc979f481c627369eff8a5527c155e7498370b3f8a2c7"
	I1020 13:24:03.714248  498346 cri.go:89] found id: "ec4f1b2741d1bcf1038b63753cb9726f96d12224de84119b14ed7d03a8e887da"
	I1020 13:24:03.714251  498346 cri.go:89] found id: "b06d2e7205597130313084e1717d17e5b507cae70710ab71067333cf26a81bff"
	I1020 13:24:03.714255  498346 cri.go:89] found id: "e584e506b7520e3f3fc6c5efbd25f505db7a034d9a0b978b8af3a90afb94f84b"
	I1020 13:24:03.714259  498346 cri.go:89] found id: "fdf35c27cf71e3c6a3b8814a9f32bced0ae742f30f72aff6760a85b4a3a7145b"
	I1020 13:24:03.714263  498346 cri.go:89] found id: "aa8e7b9b68af423d774d170b1c024dba6f7323fa1d41441cd1e8ee87d1cd0140"
	I1020 13:24:03.714266  498346 cri.go:89] found id: "631a35129ac4de8ec7ce893c70fd5f816fb79609c9e434d0fb0f0fad3f58552b"
	I1020 13:24:03.714273  498346 cri.go:89] found id: "626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190"
	I1020 13:24:03.714276  498346 cri.go:89] found id: "9f363f84681d1a5440f5360011859037100e536b500edc9635f8c9c0b5efa08f"
	I1020 13:24:03.714280  498346 cri.go:89] found id: ""
	I1020 13:24:03.714331  498346 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:24:03.725917  498346 retry.go:31] will retry after 471.928467ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:24:03Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:24:04.198323  498346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:24:04.217540  498346 pause.go:52] kubelet running: false
	I1020 13:24:04.217606  498346 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:24:04.469546  498346 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:24:04.469647  498346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:24:04.577279  498346 cri.go:89] found id: "a04ddd0eaf35fc812a2c522888d81d485b2f3a10f3d187d544c6d233a6aec6e0"
	I1020 13:24:04.577310  498346 cri.go:89] found id: "a05f9f7328a594e26d4af40388b3fa293e0634a69f2ae56945b323b31b65e515"
	I1020 13:24:04.577319  498346 cri.go:89] found id: "e9a76dd7d82fef4db24bc979f481c627369eff8a5527c155e7498370b3f8a2c7"
	I1020 13:24:04.577323  498346 cri.go:89] found id: "ec4f1b2741d1bcf1038b63753cb9726f96d12224de84119b14ed7d03a8e887da"
	I1020 13:24:04.577330  498346 cri.go:89] found id: "b06d2e7205597130313084e1717d17e5b507cae70710ab71067333cf26a81bff"
	I1020 13:24:04.577335  498346 cri.go:89] found id: "e584e506b7520e3f3fc6c5efbd25f505db7a034d9a0b978b8af3a90afb94f84b"
	I1020 13:24:04.577338  498346 cri.go:89] found id: "fdf35c27cf71e3c6a3b8814a9f32bced0ae742f30f72aff6760a85b4a3a7145b"
	I1020 13:24:04.577345  498346 cri.go:89] found id: "aa8e7b9b68af423d774d170b1c024dba6f7323fa1d41441cd1e8ee87d1cd0140"
	I1020 13:24:04.577348  498346 cri.go:89] found id: "631a35129ac4de8ec7ce893c70fd5f816fb79609c9e434d0fb0f0fad3f58552b"
	I1020 13:24:04.577355  498346 cri.go:89] found id: "626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190"
	I1020 13:24:04.577363  498346 cri.go:89] found id: "9f363f84681d1a5440f5360011859037100e536b500edc9635f8c9c0b5efa08f"
	I1020 13:24:04.577367  498346 cri.go:89] found id: ""
	I1020 13:24:04.577433  498346 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:24:04.610989  498346 out.go:203] 
	W1020 13:24:04.613877  498346 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:24:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:24:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 13:24:04.613927  498346 out.go:285] * 
	* 
	W1020 13:24:04.622361  498346 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 13:24:04.627351  498346 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-979197 --alsologtostderr -v=1 failed: exit status 80
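The failure mode itself is visible earlier in the stderr: `sudo runc list -f json` kept failing with `open /run/runc: no such file or directory`, and minikube retried after growing, jittered delays (214ms, then 471ms) before exiting with GUEST_PAUSE. A rough sketch of that retry shape; the helper below is hypothetical, not minikube's actual retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff is an illustrative stand-in for the pattern in the
	// log: run fn, and on failure wait roughly twice as long as last time
	// (plus some jitter) before the next attempt, up to maxAttempts.
	func retryWithBackoff(maxAttempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			if attempt == maxAttempts {
				break
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(3, 200*time.Millisecond, func() error {
			return fmt.Errorf("list running: runc: exit status 1")
		})
	}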
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-979197
helpers_test.go:243: (dbg) docker inspect embed-certs-979197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b",
	        "Created": "2025-10-20T13:21:40.070634794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:22:53.072175178Z",
	            "FinishedAt": "2025-10-20T13:22:51.912590863Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/hosts",
	        "LogPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b-json.log",
	        "Name": "/embed-certs-979197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-979197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-979197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b",
	                "LowerDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-979197",
	                "Source": "/var/lib/docker/volumes/embed-certs-979197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-979197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-979197",
	                "name.minikube.sigs.k8s.io": "embed-certs-979197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00ccb30000ea445505461e620ff7e0776cc8a39cc12c6b9ab591d8ad61cc34fa",
	            "SandboxKey": "/var/run/docker/netns/00ccb30000ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-979197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:50:9d:09:dd:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bde21224527a25cf82271eb68321115d5ca91f933b235b8b28a8c48a7e3f01e5",
	                    "EndpointID": "0da221d429abff228e3d4f206f0ed21dc626b4b4bd1f8873719e216785a9e8c6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-979197",
	                        "737cd86e9d78"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
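The inspect output above is the same structure the pause command walked earlier with `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` to find the SSH port (33443). A sketch of the same lookup done in Go over `docker inspect` JSON; the trimmed struct below is illustrative:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Just the slice of NetworkSettings.Ports needed from `docker inspect`.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-979197").Output()
		if err != nil {
			log.Fatal(err)
		}
		var containers []inspect // docker inspect always emits a JSON array
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatal(err)
		}
		// Equivalent of {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		binding := containers[0].NetworkSettings.Ports["22/tcp"][0]
		fmt.Printf("ssh reachable at %s:%s\n", binding.HostIP, binding.HostPort) // 127.0.0.1:33443 in the dump above
	}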
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-979197 -n embed-certs-979197
E1020 13:24:04.933079  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-979197 -n embed-certs-979197: exit status 2 (484.63566ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
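`minikube status --format={{.Host}}` renders a Go template over a status struct and, separately, encodes health in the exit code, which is how it can print `Running` and still exit 2 here (the container runs while Kubernetes components are not all healthy). A sketch of the template side; the struct mirrors only the fields these commands reference:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical mirror of the fields the report queries with
	// --format={{.Host}} and --format={{.APIServer}}.
	type status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := status{Host: "Running", APIServer: "Paused"} // values illustrative
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Running
	}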
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-979197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-979197 logs -n 25: (1.632825182s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-995203 image list --format=json                                                                                                                                                                                               │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ pause   │ -p old-k8s-version-995203 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │                     │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:21 UTC │
	│ delete  │ -p cert-expiration-066011                                                                                                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:21 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-794175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ stop    │ -p embed-certs-979197 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ image   │ default-k8s-diff-port-794175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ pause   │ -p default-k8s-diff-port-794175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p disable-driver-mounts-972433                                                                                                                                                                                                               │ disable-driver-mounts-972433 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ image   │ embed-certs-979197 image list --format=json                                                                                                                                                                                                   │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ pause   │ -p embed-certs-979197 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:23:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:23:34.266911  495732 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:23:34.267076  495732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:23:34.267099  495732 out.go:374] Setting ErrFile to fd 2...
	I1020 13:23:34.267117  495732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:23:34.267400  495732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:23:34.268872  495732 out.go:368] Setting JSON to false
	I1020 13:23:34.270109  495732 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11165,"bootTime":1760955450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:23:34.270221  495732 start.go:141] virtualization:  
	I1020 13:23:34.275937  495732 out.go:179] * [no-preload-744804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:23:34.279134  495732 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:23:34.279201  495732 notify.go:220] Checking for updates...
	I1020 13:23:34.285170  495732 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:23:34.288137  495732 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:23:34.291102  495732 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:23:34.293984  495732 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:23:34.296910  495732 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:23:34.300445  495732 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:23:34.300579  495732 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:23:34.328469  495732 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:23:34.328592  495732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:23:34.392901  495732 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:23:34.382836678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:23:34.393004  495732 docker.go:318] overlay module found
	I1020 13:23:34.396316  495732 out.go:179] * Using the docker driver based on user configuration
	I1020 13:23:34.399193  495732 start.go:305] selected driver: docker
	I1020 13:23:34.399215  495732 start.go:925] validating driver "docker" against <nil>
	I1020 13:23:34.399242  495732 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:23:34.400036  495732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:23:34.460941  495732 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:23:34.451563958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:23:34.461116  495732 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 13:23:34.461341  495732 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:23:34.464345  495732 out.go:179] * Using Docker driver with root privileges
	I1020 13:23:34.467197  495732 cni.go:84] Creating CNI manager for ""
	I1020 13:23:34.467280  495732 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:23:34.467293  495732 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:23:34.467372  495732 start.go:349] cluster config:
	{Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:23:34.470393  495732 out.go:179] * Starting "no-preload-744804" primary control-plane node in "no-preload-744804" cluster
	I1020 13:23:34.473205  495732 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:23:34.475963  495732 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:23:34.478771  495732 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:23:34.478868  495732 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:23:34.478952  495732 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json ...
	I1020 13:23:34.478987  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json: {Name:mkd1d2b9e52656dca22053032defc126d51cb142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:34.481296  495732 cache.go:107] acquiring lock: {Name:mk2466d3c957a995adbebbabeab0fa3cc60b0749 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.481429  495732 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1020 13:23:34.481481  495732 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.633769ms
	I1020 13:23:34.481496  495732 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1020 13:23:34.481515  495732 cache.go:107] acquiring lock: {Name:mk91e48e01c9d742f280bc2f9044086cb15ac8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.482365  495732 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:34.482882  495732 cache.go:107] acquiring lock: {Name:mk06b7edc57ee881bc4af5e7d1c0bb5270ebff49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.483023  495732 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:34.483293  495732 cache.go:107] acquiring lock: {Name:mk1d0a9075d8d12111d126a101053db6ac0a7b69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.483412  495732 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:34.483634  495732 cache.go:107] acquiring lock: {Name:mk2f501eec0d7af6312aef6efa1f5bbad5f4d684 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.485281  495732 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:34.485526  495732 cache.go:107] acquiring lock: {Name:mk76c9e0dd61216d0c0ba53e6cfb9cbe19ddfd70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.485617  495732 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1020 13:23:34.485790  495732 cache.go:107] acquiring lock: {Name:mkd8eb3de224a6da14efa26f40075e815e71b6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.485865  495732 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:34.486026  495732 cache.go:107] acquiring lock: {Name:mkf695cbf431ff83306d5e1211f07fc194d769c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.486103  495732 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:34.489939  495732 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:34.491245  495732 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:34.491407  495732 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1020 13:23:34.491552  495732 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:34.491683  495732 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:34.492324  495732 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:34.492748  495732 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:34.506507  495732 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:23:34.506530  495732 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:23:34.506549  495732 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:23:34.506593  495732 start.go:360] acquireMachinesLock for no-preload-744804: {Name:mk60261f5e12334720a2e0b8e33ce6265dbb09b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.506721  495732 start.go:364] duration metric: took 105.487µs to acquireMachinesLock for "no-preload-744804"
	I1020 13:23:34.506753  495732 start.go:93] Provisioning new machine with config: &{Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:23:34.506826  495732 start.go:125] createHost starting for "" (driver="docker")
	W1020 13:23:33.988847  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:36.488946  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	I1020 13:23:34.510516  495732 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 13:23:34.510750  495732 start.go:159] libmachine.API.Create for "no-preload-744804" (driver="docker")
	I1020 13:23:34.510791  495732 client.go:168] LocalClient.Create starting
	I1020 13:23:34.510871  495732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem
	I1020 13:23:34.510909  495732 main.go:141] libmachine: Decoding PEM data...
	I1020 13:23:34.510926  495732 main.go:141] libmachine: Parsing certificate...
	I1020 13:23:34.510985  495732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem
	I1020 13:23:34.511011  495732 main.go:141] libmachine: Decoding PEM data...
	I1020 13:23:34.511027  495732 main.go:141] libmachine: Parsing certificate...
	I1020 13:23:34.511429  495732 cli_runner.go:164] Run: docker network inspect no-preload-744804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 13:23:34.537560  495732 cli_runner.go:211] docker network inspect no-preload-744804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 13:23:34.537647  495732 network_create.go:284] running [docker network inspect no-preload-744804] to gather additional debugging logs...
	I1020 13:23:34.537671  495732 cli_runner.go:164] Run: docker network inspect no-preload-744804
	W1020 13:23:34.554942  495732 cli_runner.go:211] docker network inspect no-preload-744804 returned with exit code 1
	I1020 13:23:34.554975  495732 network_create.go:287] error running [docker network inspect no-preload-744804]: docker network inspect no-preload-744804: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-744804 not found
	I1020 13:23:34.554989  495732 network_create.go:289] output of [docker network inspect no-preload-744804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-744804 not found
	
	** /stderr **
	I1020 13:23:34.555093  495732 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:23:34.571357  495732 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-31214b196961 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:99:57:10:1b:40} reservation:<nil>}
	I1020 13:23:34.571646  495732 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf6e9e751b4a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:0d:2b:68:24:bc} reservation:<nil>}
	I1020 13:23:34.572003  495732 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-076921d0625d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:c5:51:b1:3d:0c} reservation:<nil>}
	I1020 13:23:34.572487  495732 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c42e20}
	I1020 13:23:34.572515  495732 network_create.go:124] attempt to create docker network no-preload-744804 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1020 13:23:34.572573  495732 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-744804 no-preload-744804
	I1020 13:23:34.651144  495732 network_create.go:108] docker network no-preload-744804 192.168.76.0/24 created
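
As an aside on the subnet scan above: the three existing docker bridges occupy 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24, and minikube settles on 192.168.76.0/24, so the candidate /24 networks appear to advance in steps of 9. A minimal Go sketch of that selection, inferred from this log alone (firstFreeSubnet is a hypothetical helper for illustration, not minikube's actual API):

	// firstFreeSubnet walks candidate /24 networks starting at
	// 192.168.49.0 in steps of 9 (49, 58, 67, 76, ...) and returns
	// the first one not already claimed by an existing bridge.
	package main

	import "fmt"

	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third < 256; third += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[subnet] {
				return subnet
			}
		}
		return ""
	}

	func main() {
		// The three subnets this log reports as taken.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24
	}
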
	I1020 13:23:34.651176  495732 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-744804" container
	I1020 13:23:34.651255  495732 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 13:23:34.670420  495732 cli_runner.go:164] Run: docker volume create no-preload-744804 --label name.minikube.sigs.k8s.io=no-preload-744804 --label created_by.minikube.sigs.k8s.io=true
	I1020 13:23:34.688114  495732 oci.go:103] Successfully created a docker volume no-preload-744804
	I1020 13:23:34.688202  495732 cli_runner.go:164] Run: docker run --rm --name no-preload-744804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-744804 --entrypoint /usr/bin/test -v no-preload-744804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 13:23:34.871927  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1020 13:23:34.920723  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1020 13:23:34.928759  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1020 13:23:34.956150  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1020 13:23:34.963962  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1020 13:23:34.996644  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1020 13:23:35.008742  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1020 13:23:35.016718  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1020 13:23:35.016752  495732 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 531.229706ms
	I1020 13:23:35.016766  495732 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1020 13:23:35.298558  495732 oci.go:107] Successfully prepared a docker volume no-preload-744804
	I1020 13:23:35.298591  495732 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1020 13:23:35.298724  495732 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1020 13:23:35.298853  495732 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 13:23:35.379303  495732 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-744804 --name no-preload-744804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-744804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-744804 --network no-preload-744804 --ip 192.168.76.2 --volume no-preload-744804:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 13:23:35.504899  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1020 13:23:35.504974  495732 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.021342376s
	I1020 13:23:35.505001  495732 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1020 13:23:35.783104  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Running}}
	I1020 13:23:35.875630  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:23:35.917812  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1020 13:23:35.917845  495732 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.434547285s
	I1020 13:23:35.917857  495732 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1020 13:23:35.941814  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1020 13:23:35.941889  495732 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.455868633s
	I1020 13:23:35.943180  495732 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1020 13:23:35.950084  495732 cli_runner.go:164] Run: docker exec no-preload-744804 stat /var/lib/dpkg/alternatives/iptables
	I1020 13:23:35.990581  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1020 13:23:35.990619  495732 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.507734972s
	I1020 13:23:35.990632  495732 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1020 13:23:36.061968  495732 oci.go:144] the created container "no-preload-744804" has a running status.
	I1020 13:23:36.061994  495732 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa...
	I1020 13:23:36.105396  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1020 13:23:36.105467  495732 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.623952624s
	I1020 13:23:36.105492  495732 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1020 13:23:36.905674  495732 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 13:23:36.928178  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:23:36.945799  495732 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 13:23:36.945835  495732 kic_runner.go:114] Args: [docker exec --privileged no-preload-744804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 13:23:37.012898  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:23:37.036497  495732 machine.go:93] provisionDockerMachine start ...
	I1020 13:23:37.036625  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:37.055486  495732 main.go:141] libmachine: Using SSH client type: native
	I1020 13:23:37.055830  495732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1020 13:23:37.055840  495732 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:23:37.056529  495732 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1020 13:23:37.258280  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1020 13:23:37.258310  495732 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.772523351s
	I1020 13:23:37.258324  495732 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1020 13:23:37.258368  495732 cache.go:87] Successfully saved all images to host disk.
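
The cache paths in the lines above follow a simple convention: the image reference keeps its registry and repository path, the tag separator ':' becomes '_', and the file lands under <MINIKUBE_HOME>/cache/images/<arch>/. A small Go sketch of that mapping as observed here (cachePath is a hypothetical helper, not minikube's actual function):

	package main

	import (
		"fmt"
		"path/filepath"
		"strings"
	)

	// cachePath maps an image reference to its on-disk cache location,
	// e.g. "registry.k8s.io/pause:3.10.1" ->
	// ".../cache/images/arm64/registry.k8s.io/pause_3.10.1".
	func cachePath(minikubeHome, arch, imageRef string) string {
		return filepath.Join(minikubeHome, "cache", "images", arch,
			strings.ReplaceAll(imageRef, ":", "_"))
	}

	func main() {
		fmt.Println(cachePath("/home/jenkins/.minikube", "arm64",
			"registry.k8s.io/etcd:3.6.4-0"))
	}
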
	W1020 13:23:38.987564  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:40.988115  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	I1020 13:23:40.208143  495732 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744804
	
	I1020 13:23:40.208170  495732 ubuntu.go:182] provisioning hostname "no-preload-744804"
	I1020 13:23:40.208235  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:40.225913  495732 main.go:141] libmachine: Using SSH client type: native
	I1020 13:23:40.226230  495732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1020 13:23:40.226246  495732 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744804 && echo "no-preload-744804" | sudo tee /etc/hostname
	I1020 13:23:40.390662  495732 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744804
	
	I1020 13:23:40.390741  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:40.408727  495732 main.go:141] libmachine: Using SSH client type: native
	I1020 13:23:40.409026  495732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1020 13:23:40.409047  495732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744804/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:23:40.557799  495732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:23:40.557832  495732 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:23:40.557865  495732 ubuntu.go:190] setting up certificates
	I1020 13:23:40.557876  495732 provision.go:84] configureAuth start
	I1020 13:23:40.557948  495732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:23:40.575540  495732 provision.go:143] copyHostCerts
	I1020 13:23:40.575616  495732 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:23:40.575631  495732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:23:40.575714  495732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:23:40.575820  495732 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:23:40.575831  495732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:23:40.575859  495732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:23:40.575925  495732 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:23:40.575932  495732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:23:40.575957  495732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:23:40.576019  495732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.no-preload-744804 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-744804]
	I1020 13:23:40.724694  495732 provision.go:177] copyRemoteCerts
	I1020 13:23:40.724767  495732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:23:40.724810  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:40.741481  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:40.845064  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:23:40.864462  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:23:40.882009  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:23:40.899303  495732 provision.go:87] duration metric: took 341.410966ms to configureAuth
	I1020 13:23:40.899327  495732 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:23:40.899522  495732 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:23:40.899630  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:40.917590  495732 main.go:141] libmachine: Using SSH client type: native
	I1020 13:23:40.917896  495732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1020 13:23:40.917916  495732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:23:41.285682  495732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:23:41.285703  495732 machine.go:96] duration metric: took 4.249180387s to provisionDockerMachine
	I1020 13:23:41.285713  495732 client.go:171] duration metric: took 6.774909558s to LocalClient.Create
	I1020 13:23:41.285728  495732 start.go:167] duration metric: took 6.774979401s to libmachine.API.Create "no-preload-744804"
	I1020 13:23:41.285734  495732 start.go:293] postStartSetup for "no-preload-744804" (driver="docker")
	I1020 13:23:41.285744  495732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:23:41.285806  495732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:23:41.285851  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:41.307575  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:41.412687  495732 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:23:41.416038  495732 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:23:41.416123  495732 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:23:41.416135  495732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:23:41.416215  495732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:23:41.416344  495732 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:23:41.416480  495732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:23:41.423920  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:23:41.442199  495732 start.go:296] duration metric: took 156.449501ms for postStartSetup
	I1020 13:23:41.442562  495732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:23:41.460970  495732 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json ...
	I1020 13:23:41.461264  495732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:23:41.461322  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:41.477875  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:41.581915  495732 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:23:41.586709  495732 start.go:128] duration metric: took 7.079861747s to createHost
	I1020 13:23:41.586731  495732 start.go:83] releasing machines lock for "no-preload-744804", held for 7.079995395s
	I1020 13:23:41.586818  495732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:23:41.603679  495732 ssh_runner.go:195] Run: cat /version.json
	I1020 13:23:41.603709  495732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:23:41.603734  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:41.603777  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:41.622051  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:41.624868  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:41.724000  495732 ssh_runner.go:195] Run: systemctl --version
	I1020 13:23:41.835590  495732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:23:41.872754  495732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:23:41.877126  495732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:23:41.877207  495732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:23:41.909121  495732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1020 13:23:41.909197  495732 start.go:495] detecting cgroup driver to use...
	I1020 13:23:41.909247  495732 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:23:41.909354  495732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:23:41.928518  495732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:23:41.941575  495732 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:23:41.941667  495732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:23:41.958398  495732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:23:41.978668  495732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:23:42.129701  495732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:23:42.288962  495732 docker.go:234] disabling docker service ...
	I1020 13:23:42.289086  495732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:23:42.316816  495732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:23:42.331358  495732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:23:42.459663  495732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:23:42.597817  495732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:23:42.612394  495732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:23:42.626931  495732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:23:42.627051  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.636599  495732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:23:42.636721  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.645732  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.654588  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.663713  495732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:23:42.671940  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.680814  495732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.694581  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
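
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following fragment (reconstructed from the commands themselves; the resulting file is not captured in this log):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
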
	I1020 13:23:42.703532  495732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:23:42.712103  495732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:23:42.720067  495732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:23:42.837729  495732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 13:23:42.969550  495732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:23:42.969663  495732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:23:42.973745  495732 start.go:563] Will wait 60s for crictl version
	I1020 13:23:42.973857  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:42.977380  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:23:43.012109  495732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:23:43.012228  495732 ssh_runner.go:195] Run: crio --version
	I1020 13:23:43.045643  495732 ssh_runner.go:195] Run: crio --version
	I1020 13:23:43.081518  495732 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 13:23:43.084440  495732 cli_runner.go:164] Run: docker network inspect no-preload-744804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:23:43.100830  495732 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 13:23:43.105256  495732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:23:43.115073  495732 kubeadm.go:883] updating cluster {Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:23:43.115193  495732 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:23:43.115243  495732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:23:43.141278  495732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1020 13:23:43.141300  495732 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1020 13:23:43.141335  495732 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:43.141540  495732 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.141633  495732 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.141744  495732 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.141840  495732 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.141931  495732 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1020 13:23:43.142020  495732 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.142107  495732 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.143005  495732 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1020 13:23:43.143228  495732 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.143352  495732 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.143480  495732 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.143603  495732 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:43.143891  495732 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.144149  495732 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.144292  495732 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
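The "daemon lookup" failures above are expected on a no-preload start: minikube first asks the local Docker daemon for each required image and only falls back to its on-disk cache when the daemon does not have it. A minimal sketch of the same presence check (the loop and message are illustrative; image names are taken from the log):

    # Probe the local Docker daemon for each required image; a non-zero
    # exit status is what produces the "No such image" lines above.
    for img in registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0; do
      docker image inspect "$img" >/dev/null 2>&1 || echo "cache fallback: $img"
    done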
	I1020 13:23:43.398664  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1020 13:23:43.418199  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.419018  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.430229  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.451957  495732 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1020 13:23:43.452088  495732 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1020 13:23:43.452179  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.452782  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.454653  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.491911  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.506816  495732 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1020 13:23:43.506909  495732 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.506987  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.567443  495732 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1020 13:23:43.567486  495732 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.567622  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.580829  495732 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1020 13:23:43.580874  495732 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.581002  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.590580  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 13:23:43.590720  495732 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1020 13:23:43.590754  495732 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.590801  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.590862  495732 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1020 13:23:43.590887  495732 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.590951  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.612757  495732 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1020 13:23:43.612947  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.613077  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.613206  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.613282  495732 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.613339  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.631325  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.631475  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 13:23:43.631527  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.708677  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.708767  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.708850  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.708916  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.749742  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.749814  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.749864  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 13:23:43.781231  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.796855  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.829903  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.830008  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.892743  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1020 13:23:43.892868  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1020 13:23:43.892994  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.893070  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.893159  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.898103  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1020 13:23:43.898278  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1020 13:23:43.931487  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1020 13:23:43.931579  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1020 13:23:43.931679  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 13:23:43.931808  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 13:23:43.964673  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1020 13:23:43.964898  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1020 13:23:43.964938  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1020 13:23:43.964758  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1020 13:23:43.965046  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1020 13:23:43.965032  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1020 13:23:43.964782  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1020 13:23:43.964806  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1020 13:23:43.965237  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 13:23:43.964842  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1020 13:23:43.965295  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1020 13:23:43.964879  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1020 13:23:43.965329  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1020 13:23:43.965183  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1020 13:23:43.974871  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1020 13:23:43.974964  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1020 13:23:44.021067  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1020 13:23:44.021103  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1020 13:23:44.021161  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1020 13:23:44.021172  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
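Each stat failure followed by an scp above is an existence check: the cached tarball is copied to the node only if /var/lib/minikube/images does not already hold a copy. A hedged sketch of the pattern, assuming NODE is an SSH alias for the minikube container (paths taken from the log):

    # Copy a cached image archive only when the node does not have it yet.
    SRC=$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1   # host-side cache
    DST=/var/lib/minikube/images/pause_3.10.1                             # node-side target
    ssh NODE stat -c "%s %y" "$DST" >/dev/null 2>&1 || scp "$SRC" NODE:"$DST"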
	I1020 13:23:44.055314  495732 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1020 13:23:44.055442  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1020 13:23:43.489761  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:45.989236  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:44.342983  495732 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1020 13:23:44.343228  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:44.482181  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1020 13:23:44.498968  495732 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1020 13:23:44.499009  495732 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:44.499080  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:44.541675  495732 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 13:23:44.541782  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 13:23:44.579761  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:46.393815  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.85200341s)
	I1020 13:23:46.393848  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1020 13:23:46.393866  495732 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1020 13:23:46.393904  495732 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.814045076s)
	I1020 13:23:46.393938  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1020 13:23:46.394005  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:48.128751  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.734785609s)
	I1020 13:23:48.128774  495732 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.734727228s)
	I1020 13:23:48.128856  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:48.128780  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1020 13:23:48.128926  495732 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1020 13:23:48.128949  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	W1020 13:23:48.488168  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	I1020 13:23:48.988950  492109 pod_ready.go:94] pod "coredns-66bc5c9577-9hxmm" is "Ready"
	I1020 13:23:48.988977  492109 pod_ready.go:86] duration metric: took 40.50703476s for pod "coredns-66bc5c9577-9hxmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:48.992321  492109 pod_ready.go:83] waiting for pod "etcd-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:48.998401  492109 pod_ready.go:94] pod "etcd-embed-certs-979197" is "Ready"
	I1020 13:23:48.998426  492109 pod_ready.go:86] duration metric: took 6.076339ms for pod "etcd-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.003327  492109 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.011180  492109 pod_ready.go:94] pod "kube-apiserver-embed-certs-979197" is "Ready"
	I1020 13:23:49.011257  492109 pod_ready.go:86] duration metric: took 7.9054ms for pod "kube-apiserver-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.013941  492109 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.186361  492109 pod_ready.go:94] pod "kube-controller-manager-embed-certs-979197" is "Ready"
	I1020 13:23:49.186431  492109 pod_ready.go:86] duration metric: took 172.420101ms for pod "kube-controller-manager-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.387135  492109 pod_ready.go:83] waiting for pod "kube-proxy-gf2bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.788119  492109 pod_ready.go:94] pod "kube-proxy-gf2bz" is "Ready"
	I1020 13:23:49.788164  492109 pod_ready.go:86] duration metric: took 401.003968ms for pod "kube-proxy-gf2bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.986360  492109 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:50.385855  492109 pod_ready.go:94] pod "kube-scheduler-embed-certs-979197" is "Ready"
	I1020 13:23:50.385878  492109 pod_ready.go:86] duration metric: took 399.490276ms for pod "kube-scheduler-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:50.385889  492109 pod_ready.go:40] duration metric: took 41.908048897s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:23:50.457047  492109 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:23:50.460779  492109 out.go:179] * Done! kubectl is now configured to use "embed-certs-979197" cluster and "default" namespace by default
	I1020 13:23:49.584031  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.455063233s)
	I1020 13:23:49.584061  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1020 13:23:49.584080  495732 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 13:23:49.584129  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 13:23:49.584189  495732 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.455322337s)
	I1020 13:23:49.584218  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1020 13:23:49.584292  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1020 13:23:51.079902  495732 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.495585197s)
	I1020 13:23:51.079934  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1020 13:23:51.079976  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1020 13:23:51.080109  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.495963909s)
	I1020 13:23:51.080119  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1020 13:23:51.080135  495732 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 13:23:51.080177  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 13:23:52.515243  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.435045358s)
	I1020 13:23:52.515268  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1020 13:23:52.515290  495732 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1020 13:23:52.515340  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1020 13:23:56.389920  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.874552149s)
	I1020 13:23:56.389961  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1020 13:23:56.389984  495732 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1020 13:23:56.390067  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1020 13:23:56.966944  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1020 13:23:56.966980  495732 cache_images.go:124] Successfully loaded all cached images
	I1020 13:23:56.966987  495732 cache_images.go:93] duration metric: took 13.825673818s to LoadCachedImages
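Each "Loading image" step above is a podman load into the containers/storage tree that CRI-O and podman share, which is why crictl sees the image immediately afterwards. Reproducible by hand on the node with the same paths:

    # Load one cached archive and confirm CRI-O can see the image.
    sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
    sudo crictl images | grep etcd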
	I1020 13:23:56.966998  495732 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1020 13:23:56.967084  495732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-744804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
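The empty ExecStart= line in the unit fragment above is the standard systemd idiom for clearing an inherited start command before supplying a new one; minikube installs this fragment as a drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the daemon-reload appear later in this log). A trimmed sketch of the same mechanism, with the flag list shortened for readability:

    # Override kubelet's ExecStart via a systemd drop-in, then apply it.
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet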
	I1020 13:23:56.967163  495732 ssh_runner.go:195] Run: crio config
	I1020 13:23:57.028759  495732 cni.go:84] Creating CNI manager for ""
	I1020 13:23:57.028782  495732 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:23:57.028803  495732 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:23:57.028826  495732 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744804 NodeName:no-preload-744804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:23:57.028956  495732 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
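The YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml a few steps below. The assembled config can be exercised without mutating the node; a minimal sketch using the binary path from this log:

    # Validate the generated kubeadm config with a dry run (no state changes).
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run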
	
	I1020 13:23:57.029039  495732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:23:57.038306  495732 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1020 13:23:57.038377  495732 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1020 13:23:57.046452  495732 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1020 13:23:57.046550  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1020 13:23:57.047331  495732 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1020 13:23:57.047340  495732 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1020 13:23:57.051936  495732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1020 13:23:57.051973  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1020 13:23:57.871567  495732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:23:57.888627  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1020 13:23:57.900018  495732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1020 13:23:57.900612  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1020 13:23:57.986699  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1020 13:23:57.994340  495732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1020 13:23:57.996056  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
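The checksum=file:...sha256 suffix on each download URL above tells minikube's downloader to verify the fetched binary against the published SHA-256 digest before caching it. The manual equivalent for one of the binaries:

    # Fetch kubelet for linux/arm64 and verify it against the published digest.
    curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet
    curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check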
	I1020 13:23:58.552131  495732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:23:58.562054  495732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 13:23:58.576448  495732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:23:58.590815  495732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1020 13:23:58.605931  495732 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:23:58.610262  495732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:23:58.620970  495732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:23:58.752341  495732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:23:58.771590  495732 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804 for IP: 192.168.76.2
	I1020 13:23:58.771611  495732 certs.go:195] generating shared ca certs ...
	I1020 13:23:58.771627  495732 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:58.771765  495732 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:23:58.771812  495732 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:23:58.771821  495732 certs.go:257] generating profile certs ...
	I1020 13:23:58.771874  495732 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.key
	I1020 13:23:58.771890  495732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt with IP's: []
	I1020 13:23:59.091733  495732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt ...
	I1020 13:23:59.091763  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: {Name:mkf619d1e3f023a0bc178359e535b6d7341bb9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.092000  495732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.key ...
	I1020 13:23:59.092018  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.key: {Name:mk01a75192aa0d6293e7c41c457b5e86827600cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.092108  495732 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a
	I1020 13:23:59.092121  495732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt.c014680a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1020 13:23:59.205010  495732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt.c014680a ...
	I1020 13:23:59.205041  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt.c014680a: {Name:mk972c380561861417af1b11c5f7cd9fc891ee82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.205210  495732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a ...
	I1020 13:23:59.205227  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a: {Name:mk217e5a37836290e476f7e9911b017161dc3657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.205317  495732 certs.go:382] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt.c014680a -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt
	I1020 13:23:59.205397  495732 certs.go:386] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key
	I1020 13:23:59.205450  495732 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key
	I1020 13:23:59.205470  495732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt with IP's: []
	I1020 13:23:59.304245  495732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt ...
	I1020 13:23:59.304273  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt: {Name:mk0bd19d94d02f64a125c00c2925a5b36a5de40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.304475  495732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key ...
	I1020 13:23:59.304492  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key: {Name:mkc47d7bfea320fe5558e804001a9bbf0d53256e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
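The client, apiserver, and proxy-client pairs generated above are profile certs signed by the shared minikube CA. A rough openssl equivalent of the "minikube-user" client cert, shown only as a sketch (minikube does this in Go, not via openssl, and the subject fields here are assumptions):

    # Illustrative only: issue a client cert signed by the minikube CA.
    CA=$HOME/.minikube/ca                                    # ca.crt / ca.key from the log
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
    openssl x509 -req -in client.csr -CA "$CA.crt" -CAkey "$CA.key" \
      -CAcreateserial -days 365 -out client.crt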
	I1020 13:23:59.304682  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:23:59.304732  495732 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:23:59.304746  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:23:59.304772  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:23:59.304798  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:23:59.304823  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:23:59.304870  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:23:59.305420  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:23:59.324779  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:23:59.344406  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:23:59.362900  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:23:59.381972  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 13:23:59.401343  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:23:59.421833  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:23:59.441856  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:23:59.459277  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:23:59.476527  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:23:59.494807  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:23:59.513082  495732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:23:59.527108  495732 ssh_runner.go:195] Run: openssl version
	I1020 13:23:59.535732  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:23:59.545601  495732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:59.549687  495732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:59.549753  495732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:59.591732  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:23:59.600741  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:23:59.609654  495732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:23:59.614559  495732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:23:59.614648  495732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:23:59.657218  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:23:59.666162  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:23:59.674930  495732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:23:59.678828  495732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:23:59.678939  495732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:23:59.720663  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
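The b5213941.0, 51391683.0, and 3ec20f2e.0 link names above are OpenSSL subject hashes, which is how the TLS stack locates a CA certificate by subject at verification time. Computing one by hand:

    # Derive the subject-hash link name for a CA and create the symlink.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"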
	I1020 13:23:59.729358  495732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:23:59.733500  495732 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 13:23:59.733559  495732 kubeadm.go:400] StartCluster: {Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:23:59.733633  495732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:23:59.733695  495732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:23:59.762824  495732 cri.go:89] found id: ""
	I1020 13:23:59.762905  495732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:23:59.771122  495732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 13:23:59.779120  495732 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 13:23:59.779212  495732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 13:23:59.787580  495732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 13:23:59.787600  495732 kubeadm.go:157] found existing configuration files:
	
	I1020 13:23:59.787674  495732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 13:23:59.795544  495732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 13:23:59.795623  495732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 13:23:59.803247  495732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 13:23:59.811526  495732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 13:23:59.811654  495732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 13:23:59.819391  495732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 13:23:59.827768  495732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 13:23:59.827852  495732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 13:23:59.835753  495732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 13:23:59.843939  495732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 13:23:59.844060  495732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
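The grep/rm pairs above implement stale-kubeconfig cleanup: any component kubeconfig that does not already point at the expected control-plane endpoint is removed so that kubeadm regenerates it. The same logic as a loop (illustrative restatement of the commands in the log):

    # Keep a component kubeconfig only if it targets the expected endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f.conf" 2>/dev/null \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done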
	I1020 13:23:59.851960  495732 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 13:23:59.892691  495732 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 13:23:59.892957  495732 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 13:23:59.918645  495732 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 13:23:59.918807  495732 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1020 13:23:59.918867  495732 kubeadm.go:318] OS: Linux
	I1020 13:23:59.918961  495732 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 13:23:59.919067  495732 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1020 13:23:59.919148  495732 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 13:23:59.919213  495732 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 13:23:59.919281  495732 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 13:23:59.919373  495732 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 13:23:59.919518  495732 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 13:23:59.919604  495732 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 13:23:59.919682  495732 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1020 13:23:59.983280  495732 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 13:23:59.983462  495732 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 13:23:59.983591  495732 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 13:23:59.998594  495732 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 13:24:00.011971  495732 out.go:252]   - Generating certificates and keys ...
	I1020 13:24:00.012174  495732 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 13:24:00.012253  495732 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 13:24:00.972081  495732 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 13:24:02.422596  495732 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 13:24:03.263081  495732 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 13:24:03.314111  495732 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 13:24:03.520244  495732 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 13:24:03.520844  495732 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-744804] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1020 13:24:03.825287  495732 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 13:24:03.825627  495732 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-744804] and IPs [192.168.76.2 127.0.0.1 ::1]
	
	
	==> CRI-O <==
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.611169674Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.616922324Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.617086413Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.617156969Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.621589577Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.621746593Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.62189898Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.627347389Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.627761162Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.627868364Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.634804545Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.634840181Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.607156542Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=23f2e024-0581-4495-b01b-5cb481d2b579 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.608888503Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f7ab58c6-1cad-43bd-9655-03ddaf067ae3 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.61029502Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6/dashboard-metrics-scraper" id=3cf42d3b-f7a2-4de9-b759-cf220049bd88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.610392432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.624485886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.625222109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.685212142Z" level=info msg="Created container 626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6/dashboard-metrics-scraper" id=3cf42d3b-f7a2-4de9-b759-cf220049bd88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.690081761Z" level=info msg="Starting container: 626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190" id=0478b198-6a2f-4eb2-9790-efd39c398185 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.69276658Z" level=info msg="Started container" PID=1708 containerID=626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6/dashboard-metrics-scraper id=0478b198-6a2f-4eb2-9790-efd39c398185 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09495344c60fec01a74d03afcec9abff8624a76139cacc492949307fb4b62e29
	Oct 20 13:23:55 embed-certs-979197 conmon[1704]: conmon 626170f6984d91bac964 <ninfo>: container 1708 exited with status 1
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.894515414Z" level=info msg="Removing container: a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123" id=f1eee6e3-34a7-46a7-87fa-92299fdfea0d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.947790922Z" level=info msg="Error loading conmon cgroup of container a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123: cgroup deleted" id=f1eee6e3-34a7-46a7-87fa-92299fdfea0d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.978336022Z" level=info msg="Removed container a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6/dashboard-metrics-scraper" id=f1eee6e3-34a7-46a7-87fa-92299fdfea0d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	626170f6984d9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   09495344c60fe       dashboard-metrics-scraper-6ffb444bf9-dttk6   kubernetes-dashboard
	a04ddd0eaf35f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   9e1bcbc4f6e94       storage-provisioner                          kube-system
	9f363f84681d1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   4520fcd2ce6f2       kubernetes-dashboard-855c9754f9-9zg9f        kubernetes-dashboard
	a05f9f7328a59       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   f3ccc82a5720e       coredns-66bc5c9577-9hxmm                     kube-system
	3ac82206b7112       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   3b1f3badfa230       busybox                                      default
	e9a76dd7d82fe       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   9e1bcbc4f6e94       storage-provisioner                          kube-system
	ec4f1b2741d1b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   82a6558c3eeb6       kindnet-jzxdn                                kube-system
	b06d2e7205597       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   089adb56dd90b       kube-proxy-gf2bz                             kube-system
	e584e506b7520       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7e4be4bae1e57       etcd-embed-certs-979197                      kube-system
	fdf35c27cf71e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2ec466376c96a       kube-scheduler-embed-certs-979197            kube-system
	aa8e7b9b68af4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   89d3865220ab7       kube-apiserver-embed-certs-979197            kube-system
	631a35129ac4d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   523b14ba3b1d7       kube-controller-manager-embed-certs-979197   kube-system
	
	
	==> coredns [a05f9f7328a594e26d4af40388b3fa293e0634a69f2ae56945b323b31b65e515] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54303 - 57002 "HINFO IN 4889675404920174503.7655532191343327730. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014345845s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-979197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-979197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=embed-certs-979197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_22_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:22:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-979197
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:23:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:23:57 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:23:57 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:23:57 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:23:57 +0000   Mon, 20 Oct 2025 13:22:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-979197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                746efe57-6e86-4a6f-8038-c5a3b70dbd80
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-9hxmm                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-embed-certs-979197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m
	  kube-system                 kindnet-jzxdn                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-embed-certs-979197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-embed-certs-979197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-gf2bz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-embed-certs-979197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dttk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9zg9f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 113s                  kube-proxy       
	  Normal   Starting                 57s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m10s)  kubelet          Node embed-certs-979197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m10s)  kubelet          Node embed-certs-979197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m10s)  kubelet          Node embed-certs-979197 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m                    kubelet          Node embed-certs-979197 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m                    kubelet          Node embed-certs-979197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m                    kubelet          Node embed-certs-979197 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s                  node-controller  Node embed-certs-979197 event: Registered Node embed-certs-979197 in Controller
	  Normal   NodeReady                104s                  kubelet          Node embed-certs-979197 status is now: NodeReady
	  Normal   Starting                 66s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)     kubelet          Node embed-certs-979197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)     kubelet          Node embed-certs-979197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)     kubelet          Node embed-certs-979197 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                   node-controller  Node embed-certs-979197 event: Registered Node embed-certs-979197 in Controller
	
	
	==> dmesg <==
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	[ +43.225983] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e584e506b7520e3f3fc6c5efbd25f505db7a034d9a0b978b8af3a90afb94f84b] <==
	{"level":"warn","ts":"2025-10-20T13:23:05.063758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.084603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.105321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.163653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.199331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.234464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.270776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.299116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.331952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.366327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.389856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.433424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.451590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.482490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.524491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.544245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.561377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.590770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.610780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.641144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.685862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.693150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.710323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.726099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.809777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56758","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:24:06 up  3:06,  0 user,  load average: 2.06, 2.50, 2.43
	Linux embed-certs-979197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec4f1b2741d1bcf1038b63753cb9726f96d12224de84119b14ed7d03a8e887da] <==
	I1020 13:23:07.405071       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:23:07.405440       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 13:23:07.405624       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:23:07.405674       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:23:07.405711       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:23:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:23:07.610858       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:23:07.610876       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:23:07.610885       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:23:07.611202       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:23:37.610792       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 13:23:37.611127       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:23:37.611217       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 13:23:37.611738       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1020 13:23:39.211147       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:23:39.211182       1 metrics.go:72] Registering metrics
	I1020 13:23:39.211258       1 controller.go:711] "Syncing nftables rules"
	I1020 13:23:47.610915       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 13:23:47.610957       1 main.go:301] handling current node
	I1020 13:23:57.610825       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 13:23:57.610915       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aa8e7b9b68af423d774d170b1c024dba6f7323fa1d41441cd1e8ee87d1cd0140] <==
	I1020 13:23:06.719756       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:23:06.719762       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:23:06.720960       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 13:23:06.723551       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 13:23:06.723589       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 13:23:06.723796       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 13:23:06.734147       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 13:23:06.738212       1 policy_source.go:240] refreshing policies
	I1020 13:23:06.739170       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 13:23:06.780268       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 13:23:06.780326       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:23:06.784557       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:23:06.817672       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1020 13:23:06.875238       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:23:07.394289       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:23:08.041314       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:23:08.110737       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:23:08.158586       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:23:08.177693       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:23:08.335032       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.234.67"}
	I1020 13:23:08.381464       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.97.105"}
	E1020 13:23:08.384233       1 repairip.go:372] "Unhandled Error" err="the ClusterIP [IPv4]: 10.96.97.105 for Service kubernetes-dashboard/dashboard-metrics-scraper is not allocated; repairing" logger="UnhandledError"
	I1020 13:23:10.288454       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:23:10.494113       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:23:10.587030       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [631a35129ac4de8ec7ce893c70fd5f816fb79609c9e434d0fb0f0fad3f58552b] <==
	I1020 13:23:10.044678       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:23:10.044733       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:23:10.049412       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:23:10.053740       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:23:10.054935       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:23:10.059221       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 13:23:10.061565       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 13:23:10.064420       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 13:23:10.070741       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 13:23:10.071028       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:23:10.080333       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 13:23:10.080636       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 13:23:10.081196       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:23:10.081279       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:23:10.081292       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:23:10.081303       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 13:23:10.081312       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 13:23:10.081324       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 13:23:10.081332       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:23:10.091461       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:23:10.092536       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:23:10.098774       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:23:10.098901       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:23:10.098991       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-979197"
	I1020 13:23:10.099039       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [b06d2e7205597130313084e1717d17e5b507cae70710ab71067333cf26a81bff] <==
	I1020 13:23:07.887123       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:23:08.036019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:23:08.145964       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:23:08.156543       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 13:23:08.156737       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:23:08.412936       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:23:08.414130       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:23:08.419506       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:23:08.419935       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:23:08.420182       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:23:08.421723       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:23:08.432834       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:23:08.433375       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 13:23:08.423583       1 config.go:309] "Starting node config controller"
	I1020 13:23:08.433611       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:23:08.433619       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:23:08.422280       1 config.go:200] "Starting service config controller"
	I1020 13:23:08.433653       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:23:08.422614       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:23:08.433675       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:23:08.433682       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:23:08.534732       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fdf35c27cf71e3c6a3b8814a9f32bced0ae742f30f72aff6760a85b4a3a7145b] <==
	I1020 13:23:03.816102       1 serving.go:386] Generated self-signed cert in-memory
	W1020 13:23:06.773823       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 13:23:06.773857       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 13:23:06.773868       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 13:23:06.773876       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 13:23:06.842974       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:23:06.843003       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:23:06.845504       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:23:06.845570       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:23:06.845656       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:23:06.845676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:23:06.948050       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:23:11 embed-certs-979197 kubelet[772]: W1020 13:23:11.049719     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/crio-09495344c60fec01a74d03afcec9abff8624a76139cacc492949307fb4b62e29 WatchSource:0}: Error finding container 09495344c60fec01a74d03afcec9abff8624a76139cacc492949307fb4b62e29: Status 404 returned error can't find the container with id 09495344c60fec01a74d03afcec9abff8624a76139cacc492949307fb4b62e29
	Oct 20 13:23:11 embed-certs-979197 kubelet[772]: W1020 13:23:11.067012     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/crio-4520fcd2ce6f2dba312c0c0e9e970399ee724a548489685001b97ae7bf8cc091 WatchSource:0}: Error finding container 4520fcd2ce6f2dba312c0c0e9e970399ee724a548489685001b97ae7bf8cc091: Status 404 returned error can't find the container with id 4520fcd2ce6f2dba312c0c0e9e970399ee724a548489685001b97ae7bf8cc091
	Oct 20 13:23:16 embed-certs-979197 kubelet[772]: I1020 13:23:16.775616     772 scope.go:117] "RemoveContainer" containerID="3e883a34e108625af730330052ef9c85c72257ed737a8ac60b8bd77056bb88c0"
	Oct 20 13:23:17 embed-certs-979197 kubelet[772]: I1020 13:23:17.777014     772 scope.go:117] "RemoveContainer" containerID="3e883a34e108625af730330052ef9c85c72257ed737a8ac60b8bd77056bb88c0"
	Oct 20 13:23:17 embed-certs-979197 kubelet[772]: I1020 13:23:17.777295     772 scope.go:117] "RemoveContainer" containerID="c8e8b0f723b03c344d61cec79065031eaeaca6d71e07441e51fa65989ad98471"
	Oct 20 13:23:17 embed-certs-979197 kubelet[772]: E1020 13:23:17.777438     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:23:21 embed-certs-979197 kubelet[772]: I1020 13:23:21.021791     772 scope.go:117] "RemoveContainer" containerID="c8e8b0f723b03c344d61cec79065031eaeaca6d71e07441e51fa65989ad98471"
	Oct 20 13:23:21 embed-certs-979197 kubelet[772]: E1020 13:23:21.022004     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:23:32 embed-certs-979197 kubelet[772]: I1020 13:23:32.608711     772 scope.go:117] "RemoveContainer" containerID="c8e8b0f723b03c344d61cec79065031eaeaca6d71e07441e51fa65989ad98471"
	Oct 20 13:23:32 embed-certs-979197 kubelet[772]: I1020 13:23:32.819118     772 scope.go:117] "RemoveContainer" containerID="c8e8b0f723b03c344d61cec79065031eaeaca6d71e07441e51fa65989ad98471"
	Oct 20 13:23:33 embed-certs-979197 kubelet[772]: I1020 13:23:33.823300     772 scope.go:117] "RemoveContainer" containerID="a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123"
	Oct 20 13:23:33 embed-certs-979197 kubelet[772]: E1020 13:23:33.823481     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:23:33 embed-certs-979197 kubelet[772]: I1020 13:23:33.846491     772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9zg9f" podStartSLOduration=13.605175029 podStartE2EDuration="23.845790601s" podCreationTimestamp="2025-10-20 13:23:10 +0000 UTC" firstStartedPulling="2025-10-20 13:23:11.070098754 +0000 UTC m=+10.657307863" lastFinishedPulling="2025-10-20 13:23:21.310714326 +0000 UTC m=+20.897923435" observedRunningTime="2025-10-20 13:23:21.808673104 +0000 UTC m=+21.395882246" watchObservedRunningTime="2025-10-20 13:23:33.845790601 +0000 UTC m=+33.432999701"
	Oct 20 13:23:37 embed-certs-979197 kubelet[772]: I1020 13:23:37.834493     772 scope.go:117] "RemoveContainer" containerID="e9a76dd7d82fef4db24bc979f481c627369eff8a5527c155e7498370b3f8a2c7"
	Oct 20 13:23:41 embed-certs-979197 kubelet[772]: I1020 13:23:41.021840     772 scope.go:117] "RemoveContainer" containerID="a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123"
	Oct 20 13:23:41 embed-certs-979197 kubelet[772]: E1020 13:23:41.022005     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:23:55 embed-certs-979197 kubelet[772]: I1020 13:23:55.606213     772 scope.go:117] "RemoveContainer" containerID="a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123"
	Oct 20 13:23:55 embed-certs-979197 kubelet[772]: I1020 13:23:55.884787     772 scope.go:117] "RemoveContainer" containerID="a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123"
	Oct 20 13:23:55 embed-certs-979197 kubelet[772]: I1020 13:23:55.885118     772 scope.go:117] "RemoveContainer" containerID="626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190"
	Oct 20 13:23:55 embed-certs-979197 kubelet[772]: E1020 13:23:55.885282     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:24:01 embed-certs-979197 kubelet[772]: I1020 13:24:01.021865     772 scope.go:117] "RemoveContainer" containerID="626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190"
	Oct 20 13:24:01 embed-certs-979197 kubelet[772]: E1020 13:24:01.022067     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:24:02 embed-certs-979197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:24:03 embed-certs-979197 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:24:03 embed-certs-979197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9f363f84681d1a5440f5360011859037100e536b500edc9635f8c9c0b5efa08f] <==
	2025/10/20 13:23:21 Using namespace: kubernetes-dashboard
	2025/10/20 13:23:21 Using in-cluster config to connect to apiserver
	2025/10/20 13:23:21 Using secret token for csrf signing
	2025/10/20 13:23:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 13:23:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 13:23:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 13:23:21 Generating JWE encryption key
	2025/10/20 13:23:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 13:23:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 13:23:22 Initializing JWE encryption key from synchronized object
	2025/10/20 13:23:22 Creating in-cluster Sidecar client
	2025/10/20 13:23:22 Serving insecurely on HTTP port: 9090
	2025/10/20 13:23:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:23:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:23:21 Starting overwatch
	
	
	==> storage-provisioner [a04ddd0eaf35fc812a2c522888d81d485b2f3a10f3d187d544c6d233a6aec6e0] <==
	I1020 13:23:37.923268       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:23:37.923578       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 13:23:37.932735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:41.387428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:45.648257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:49.247286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:52.301325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:55.327723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:55.335751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:23:55.335888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:23:55.336415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"390cf0fd-e9c8-4ac9-a37f-95614000b7ae", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-979197_cab80016-8017-48ca-b77d-ae0221310200 became leader
	I1020 13:23:55.341029       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-979197_cab80016-8017-48ca-b77d-ae0221310200!
	W1020 13:23:55.349943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:55.358624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:23:55.441353       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-979197_cab80016-8017-48ca-b77d-ae0221310200!
	W1020 13:23:57.362560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:57.372343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:59.377921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:59.386307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:01.389320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:01.396573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:03.400128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:03.412591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:05.419375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:05.427101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e9a76dd7d82fef4db24bc979f481c627369eff8a5527c155e7498370b3f8a2c7] <==
	I1020 13:23:07.396920       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 13:23:37.399915       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-979197 -n embed-certs-979197
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-979197 -n embed-certs-979197: exit status 2 (522.013697ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-979197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-979197
helpers_test.go:243: (dbg) docker inspect embed-certs-979197:

-- stdout --
	[
	    {
	        "Id": "737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b",
	        "Created": "2025-10-20T13:21:40.070634794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:22:53.072175178Z",
	            "FinishedAt": "2025-10-20T13:22:51.912590863Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/hosts",
	        "LogPath": "/var/lib/docker/containers/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b-json.log",
	        "Name": "/embed-certs-979197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-979197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-979197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b",
	                "LowerDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/78d311a13934c06b24322c6f1526e4bdcc85b33a5e696a18733fedb298e81c6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-979197",
	                "Source": "/var/lib/docker/volumes/embed-certs-979197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-979197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-979197",
	                "name.minikube.sigs.k8s.io": "embed-certs-979197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "00ccb30000ea445505461e620ff7e0776cc8a39cc12c6b9ab591d8ad61cc34fa",
	            "SandboxKey": "/var/run/docker/netns/00ccb30000ea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-979197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:50:9d:09:dd:99",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bde21224527a25cf82271eb68321115d5ca91f933b235b8b28a8c48a7e3f01e5",
	                    "EndpointID": "0da221d429abff228e3d4f206f0ed21dc626b4b4bd1f8873719e216785a9e8c6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-979197",
	                        "737cd86e9d78"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
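For triage, the state fields that matter to a Pause failure can be pulled from the container directly with a Go template rather than rereading the full dump above; a minimal sketch using the standard docker CLI and the profile name from this run:

	# Show only the state fields relevant to a pause post-mortem (sketch).
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' embed-certs-979197

Note that "Paused": false above describes the outer kic container; minikube pause acts on the Kubernetes components inside the node, so the node container itself is not expected to be docker-paused even after a successful pause.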
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-979197 -n embed-certs-979197
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-979197 -n embed-certs-979197: exit status 2 (468.186033ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
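The "(may be ok)" note reflects that minikube status reports component health through its exit code, so a nonzero exit alongside Host=Running is consistent with a cluster whose Kubernetes components are stopped or paused below the host layer. A fuller per-component view is available as JSON (sketch, same profile as this run):

	# Host, Kubelet, APIServer and Kubeconfig are reported as separate fields.
	out/minikube-linux-arm64 status -p embed-certs-979197 --output json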
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-979197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-979197 logs -n 25: (1.579647452s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:19 UTC │ 20 Oct 25 13:21 UTC │
	│ image   │ old-k8s-version-995203 image list --format=json                                                                                                                                                                                               │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ pause   │ -p old-k8s-version-995203 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │                     │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:21 UTC │
	│ delete  │ -p cert-expiration-066011                                                                                                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:21 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-794175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ stop    │ -p embed-certs-979197 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ image   │ default-k8s-diff-port-794175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ pause   │ -p default-k8s-diff-port-794175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p disable-driver-mounts-972433                                                                                                                                                                                                               │ disable-driver-mounts-972433 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ image   │ embed-certs-979197 image list --format=json                                                                                                                                                                                                   │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ pause   │ -p embed-certs-979197 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:23:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:23:34.266911  495732 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:23:34.267076  495732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:23:34.267099  495732 out.go:374] Setting ErrFile to fd 2...
	I1020 13:23:34.267117  495732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:23:34.267400  495732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:23:34.268872  495732 out.go:368] Setting JSON to false
	I1020 13:23:34.270109  495732 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11165,"bootTime":1760955450,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:23:34.270221  495732 start.go:141] virtualization:  
	I1020 13:23:34.275937  495732 out.go:179] * [no-preload-744804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:23:34.279134  495732 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:23:34.279201  495732 notify.go:220] Checking for updates...
	I1020 13:23:34.285170  495732 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:23:34.288137  495732 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:23:34.291102  495732 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:23:34.293984  495732 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:23:34.296910  495732 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:23:34.300445  495732 config.go:182] Loaded profile config "embed-certs-979197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:23:34.300579  495732 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:23:34.328469  495732 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:23:34.328592  495732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:23:34.392901  495732 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:23:34.382836678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:23:34.393004  495732 docker.go:318] overlay module found
	I1020 13:23:34.396316  495732 out.go:179] * Using the docker driver based on user configuration
	I1020 13:23:34.399193  495732 start.go:305] selected driver: docker
	I1020 13:23:34.399215  495732 start.go:925] validating driver "docker" against <nil>
	I1020 13:23:34.399242  495732 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:23:34.400036  495732 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:23:34.460941  495732 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:23:34.451563958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:23:34.461116  495732 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 13:23:34.461341  495732 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:23:34.464345  495732 out.go:179] * Using Docker driver with root privileges
	I1020 13:23:34.467197  495732 cni.go:84] Creating CNI manager for ""
	I1020 13:23:34.467280  495732 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:23:34.467293  495732 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:23:34.467372  495732 start.go:349] cluster config:
	{Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:23:34.470393  495732 out.go:179] * Starting "no-preload-744804" primary control-plane node in "no-preload-744804" cluster
	I1020 13:23:34.473205  495732 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:23:34.475963  495732 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:23:34.478771  495732 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:23:34.478868  495732 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:23:34.478952  495732 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json ...
	I1020 13:23:34.478987  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json: {Name:mkd1d2b9e52656dca22053032defc126d51cb142 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:34.481296  495732 cache.go:107] acquiring lock: {Name:mk2466d3c957a995adbebbabeab0fa3cc60b0749 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.481429  495732 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1020 13:23:34.481481  495732 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.633769ms
	I1020 13:23:34.481496  495732 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1020 13:23:34.481515  495732 cache.go:107] acquiring lock: {Name:mk91e48e01c9d742f280bc2f9044086cb15ac8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.482365  495732 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:34.482882  495732 cache.go:107] acquiring lock: {Name:mk06b7edc57ee881bc4af5e7d1c0bb5270ebff49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.483023  495732 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:34.483293  495732 cache.go:107] acquiring lock: {Name:mk1d0a9075d8d12111d126a101053db6ac0a7b69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.483412  495732 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:34.483634  495732 cache.go:107] acquiring lock: {Name:mk2f501eec0d7af6312aef6efa1f5bbad5f4d684 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.485281  495732 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:34.485526  495732 cache.go:107] acquiring lock: {Name:mk76c9e0dd61216d0c0ba53e6cfb9cbe19ddfd70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.485617  495732 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1020 13:23:34.485790  495732 cache.go:107] acquiring lock: {Name:mkd8eb3de224a6da14efa26f40075e815e71b6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.485865  495732 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:34.486026  495732 cache.go:107] acquiring lock: {Name:mkf695cbf431ff83306d5e1211f07fc194d769c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.486103  495732 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:34.489939  495732 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:34.491245  495732 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:34.491407  495732 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1020 13:23:34.491552  495732 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:34.491683  495732 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:34.492324  495732 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:34.492748  495732 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:34.506507  495732 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:23:34.506530  495732 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:23:34.506549  495732 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:23:34.506593  495732 start.go:360] acquireMachinesLock for no-preload-744804: {Name:mk60261f5e12334720a2e0b8e33ce6265dbb09b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:23:34.506721  495732 start.go:364] duration metric: took 105.487µs to acquireMachinesLock for "no-preload-744804"
	I1020 13:23:34.506753  495732 start.go:93] Provisioning new machine with config: &{Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:23:34.506826  495732 start.go:125] createHost starting for "" (driver="docker")
	W1020 13:23:33.988847  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:36.488946  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	I1020 13:23:34.510516  495732 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 13:23:34.510750  495732 start.go:159] libmachine.API.Create for "no-preload-744804" (driver="docker")
	I1020 13:23:34.510791  495732 client.go:168] LocalClient.Create starting
	I1020 13:23:34.510871  495732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem
	I1020 13:23:34.510909  495732 main.go:141] libmachine: Decoding PEM data...
	I1020 13:23:34.510926  495732 main.go:141] libmachine: Parsing certificate...
	I1020 13:23:34.510985  495732 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem
	I1020 13:23:34.511011  495732 main.go:141] libmachine: Decoding PEM data...
	I1020 13:23:34.511027  495732 main.go:141] libmachine: Parsing certificate...
	I1020 13:23:34.511429  495732 cli_runner.go:164] Run: docker network inspect no-preload-744804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 13:23:34.537560  495732 cli_runner.go:211] docker network inspect no-preload-744804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 13:23:34.537647  495732 network_create.go:284] running [docker network inspect no-preload-744804] to gather additional debugging logs...
	I1020 13:23:34.537671  495732 cli_runner.go:164] Run: docker network inspect no-preload-744804
	W1020 13:23:34.554942  495732 cli_runner.go:211] docker network inspect no-preload-744804 returned with exit code 1
	I1020 13:23:34.554975  495732 network_create.go:287] error running [docker network inspect no-preload-744804]: docker network inspect no-preload-744804: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-744804 not found
	I1020 13:23:34.554989  495732 network_create.go:289] output of [docker network inspect no-preload-744804]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-744804 not found
	
	** /stderr **
	I1020 13:23:34.555093  495732 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:23:34.571357  495732 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-31214b196961 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:99:57:10:1b:40} reservation:<nil>}
	I1020 13:23:34.571646  495732 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf6e9e751b4a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:0d:2b:68:24:bc} reservation:<nil>}
	I1020 13:23:34.572003  495732 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-076921d0625d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:c5:51:b1:3d:0c} reservation:<nil>}
	I1020 13:23:34.572487  495732 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c42e20}
	I1020 13:23:34.572515  495732 network_create.go:124] attempt to create docker network no-preload-744804 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1020 13:23:34.572573  495732 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-744804 no-preload-744804
	I1020 13:23:34.651144  495732 network_create.go:108] docker network no-preload-744804 192.168.76.0/24 created
	I1020 13:23:34.651176  495732 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-744804" container
	I1020 13:23:34.651255  495732 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 13:23:34.670420  495732 cli_runner.go:164] Run: docker volume create no-preload-744804 --label name.minikube.sigs.k8s.io=no-preload-744804 --label created_by.minikube.sigs.k8s.io=true
	I1020 13:23:34.688114  495732 oci.go:103] Successfully created a docker volume no-preload-744804
	I1020 13:23:34.688202  495732 cli_runner.go:164] Run: docker run --rm --name no-preload-744804-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-744804 --entrypoint /usr/bin/test -v no-preload-744804:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 13:23:34.871927  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1020 13:23:34.920723  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1020 13:23:34.928759  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1020 13:23:34.956150  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1020 13:23:34.963962  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1020 13:23:34.996644  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1020 13:23:35.008742  495732 cache.go:162] opening:  /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1020 13:23:35.016718  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1020 13:23:35.016752  495732 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 531.229706ms
	I1020 13:23:35.016766  495732 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1020 13:23:35.298558  495732 oci.go:107] Successfully prepared a docker volume no-preload-744804
	I1020 13:23:35.298591  495732 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1020 13:23:35.298724  495732 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1020 13:23:35.298853  495732 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 13:23:35.379303  495732 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-744804 --name no-preload-744804 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-744804 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-744804 --network no-preload-744804 --ip 192.168.76.2 --volume no-preload-744804:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 13:23:35.504899  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1020 13:23:35.504974  495732 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 1.021342376s
	I1020 13:23:35.505001  495732 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1020 13:23:35.783104  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Running}}
	I1020 13:23:35.875630  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:23:35.917812  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1020 13:23:35.917845  495732 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.434547285s
	I1020 13:23:35.917857  495732 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1020 13:23:35.941814  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1020 13:23:35.941889  495732 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.455868633s
	I1020 13:23:35.943180  495732 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1020 13:23:35.950084  495732 cli_runner.go:164] Run: docker exec no-preload-744804 stat /var/lib/dpkg/alternatives/iptables
	I1020 13:23:35.990581  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1020 13:23:35.990619  495732 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.507734972s
	I1020 13:23:35.990632  495732 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1020 13:23:36.061968  495732 oci.go:144] the created container "no-preload-744804" has a running status.
	I1020 13:23:36.061994  495732 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa...
	I1020 13:23:36.105396  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1020 13:23:36.105467  495732 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.623952624s
	I1020 13:23:36.105492  495732 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1020 13:23:36.905674  495732 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 13:23:36.928178  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:23:36.945799  495732 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 13:23:36.945835  495732 kic_runner.go:114] Args: [docker exec --privileged no-preload-744804 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 13:23:37.012898  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:23:37.036497  495732 machine.go:93] provisionDockerMachine start ...
	I1020 13:23:37.036625  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:37.055486  495732 main.go:141] libmachine: Using SSH client type: native
	I1020 13:23:37.055830  495732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1020 13:23:37.055840  495732 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:23:37.056529  495732 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1020 13:23:37.258280  495732 cache.go:157] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1020 13:23:37.258310  495732 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.772523351s
	I1020 13:23:37.258324  495732 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1020 13:23:37.258368  495732 cache.go:87] Successfully saved all images to host disk.
	W1020 13:23:38.987564  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:40.988115  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	I1020 13:23:40.208143  495732 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744804
	
	I1020 13:23:40.208170  495732 ubuntu.go:182] provisioning hostname "no-preload-744804"
	I1020 13:23:40.208235  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:40.225913  495732 main.go:141] libmachine: Using SSH client type: native
	I1020 13:23:40.226230  495732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1020 13:23:40.226246  495732 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744804 && echo "no-preload-744804" | sudo tee /etc/hostname
	I1020 13:23:40.390662  495732 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744804
	
	I1020 13:23:40.390741  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:40.408727  495732 main.go:141] libmachine: Using SSH client type: native
	I1020 13:23:40.409026  495732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1020 13:23:40.409047  495732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744804/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:23:40.557799  495732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:23:40.557832  495732 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:23:40.557865  495732 ubuntu.go:190] setting up certificates
	I1020 13:23:40.557876  495732 provision.go:84] configureAuth start
	I1020 13:23:40.557948  495732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:23:40.575540  495732 provision.go:143] copyHostCerts
	I1020 13:23:40.575616  495732 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:23:40.575631  495732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:23:40.575714  495732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:23:40.575820  495732 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:23:40.575831  495732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:23:40.575859  495732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:23:40.575925  495732 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:23:40.575932  495732 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:23:40.575957  495732 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:23:40.576019  495732 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.no-preload-744804 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-744804]
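provision.go generates that server pair in Go, but the san=[...] field in the log maps one-to-one onto a stock openssl flow. A rough equivalent, with hypothetical file names standing in for the .minikube paths above:

    # Hypothetical file names; the SAN list is copied from the san=[...] log field.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.no-preload-744804"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:no-preload-744804')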
	I1020 13:23:40.724694  495732 provision.go:177] copyRemoteCerts
	I1020 13:23:40.724767  495732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:23:40.724810  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:40.741481  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:40.845064  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:23:40.864462  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:23:40.882009  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:23:40.899303  495732 provision.go:87] duration metric: took 341.410966ms to configureAuth
	I1020 13:23:40.899327  495732 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:23:40.899522  495732 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:23:40.899630  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:40.917590  495732 main.go:141] libmachine: Using SSH client type: native
	I1020 13:23:40.917896  495732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1020 13:23:40.917916  495732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:23:41.285682  495732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:23:41.285703  495732 machine.go:96] duration metric: took 4.249180387s to provisionDockerMachine
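The /etc/sysconfig/crio.minikube drop-in written just above is presumably consumed through an EnvironmentFile= reference in the crio unit; that wiring lives in the kicbase image rather than in this log, so it can be confirmed on the node with:

    systemctl cat crio | grep -i EnvironmentFile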
	I1020 13:23:41.285713  495732 client.go:171] duration metric: took 6.774909558s to LocalClient.Create
	I1020 13:23:41.285728  495732 start.go:167] duration metric: took 6.774979401s to libmachine.API.Create "no-preload-744804"
	I1020 13:23:41.285734  495732 start.go:293] postStartSetup for "no-preload-744804" (driver="docker")
	I1020 13:23:41.285744  495732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:23:41.285806  495732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:23:41.285851  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:41.307575  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:41.412687  495732 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:23:41.416038  495732 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:23:41.416123  495732 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:23:41.416135  495732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:23:41.416215  495732 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:23:41.416344  495732 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:23:41.416480  495732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:23:41.423920  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:23:41.442199  495732 start.go:296] duration metric: took 156.449501ms for postStartSetup
	I1020 13:23:41.442562  495732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:23:41.460970  495732 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json ...
	I1020 13:23:41.461264  495732 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:23:41.461322  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:41.477875  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:41.581915  495732 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:23:41.586709  495732 start.go:128] duration metric: took 7.079861747s to createHost
	I1020 13:23:41.586731  495732 start.go:83] releasing machines lock for "no-preload-744804", held for 7.079995395s
	I1020 13:23:41.586818  495732 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:23:41.603679  495732 ssh_runner.go:195] Run: cat /version.json
	I1020 13:23:41.603709  495732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:23:41.603734  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:41.603777  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:23:41.622051  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:41.624868  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:23:41.724000  495732 ssh_runner.go:195] Run: systemctl --version
	I1020 13:23:41.835590  495732 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:23:41.872754  495732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:23:41.877126  495732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:23:41.877207  495732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:23:41.909121  495732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1020 13:23:41.909197  495732 start.go:495] detecting cgroup driver to use...
	I1020 13:23:41.909247  495732 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:23:41.909354  495732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:23:41.928518  495732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:23:41.941575  495732 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:23:41.941667  495732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:23:41.958398  495732 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:23:41.978668  495732 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:23:42.129701  495732 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:23:42.288962  495732 docker.go:234] disabling docker service ...
	I1020 13:23:42.289086  495732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:23:42.316816  495732 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:23:42.331358  495732 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:23:42.459663  495732 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:23:42.597817  495732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:23:42.612394  495732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:23:42.626931  495732 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:23:42.627051  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.636599  495732 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:23:42.636721  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.645732  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.654588  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.663713  495732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:23:42.671940  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.680814  495732 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:23:42.694581  495732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
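Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a fragment along these lines; the section headers are assumed from the stock CRI-O layout and the exact surrounding keys vary by image:

    # Sketch of the resulting drop-in, reconstructed from the sed commands above.
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]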
	I1020 13:23:42.703532  495732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:23:42.712103  495732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:23:42.720067  495732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:23:42.837729  495732 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 13:23:42.969550  495732 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:23:42.969663  495732 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:23:42.973745  495732 start.go:563] Will wait 60s for crictl version
	I1020 13:23:42.973857  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:42.977380  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:23:43.012109  495732 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:23:43.012228  495732 ssh_runner.go:195] Run: crio --version
	I1020 13:23:43.045643  495732 ssh_runner.go:195] Run: crio --version
	I1020 13:23:43.081518  495732 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 13:23:43.084440  495732 cli_runner.go:164] Run: docker network inspect no-preload-744804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:23:43.100830  495732 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 13:23:43.105256  495732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:23:43.115073  495732 kubeadm.go:883] updating cluster {Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:23:43.115193  495732 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:23:43.115243  495732 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:23:43.141278  495732 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1020 13:23:43.141300  495732 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1020 13:23:43.141335  495732 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:43.141540  495732 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.141633  495732 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.141744  495732 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.141840  495732 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.141931  495732 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1020 13:23:43.142020  495732 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.142107  495732 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.143005  495732 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1020 13:23:43.143228  495732 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.143352  495732 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.143480  495732 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.143603  495732 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:43.143891  495732 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.144149  495732 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.144292  495732 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.398664  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1020 13:23:43.418199  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.419018  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.430229  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.451957  495732 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1020 13:23:43.452088  495732 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1020 13:23:43.452179  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.452782  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.454653  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.491911  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.506816  495732 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1020 13:23:43.506909  495732 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.506987  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.567443  495732 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1020 13:23:43.567486  495732 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.567622  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.580829  495732 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1020 13:23:43.580874  495732 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.581002  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.590580  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 13:23:43.590720  495732 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1020 13:23:43.590754  495732 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.590801  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.590862  495732 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1020 13:23:43.590887  495732 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.590951  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.612757  495732 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1020 13:23:43.612947  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.613077  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.613206  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.613282  495732 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.613339  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:43.631325  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.631475  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 13:23:43.631527  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.708677  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.708767  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.708850  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.708916  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.749742  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.749814  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.749864  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1020 13:23:43.781231  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.796855  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1020 13:23:43.829903  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1020 13:23:43.830008  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1020 13:23:43.892743  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1020 13:23:43.892868  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1020 13:23:43.892994  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1020 13:23:43.893070  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1020 13:23:43.893159  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1020 13:23:43.898103  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1020 13:23:43.898278  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1020 13:23:43.931487  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1020 13:23:43.931579  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1020 13:23:43.931679  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 13:23:43.931808  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 13:23:43.964673  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1020 13:23:43.964898  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1020 13:23:43.964938  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1020 13:23:43.964758  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1020 13:23:43.965046  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1020 13:23:43.965032  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1020 13:23:43.964782  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1020 13:23:43.964806  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1020 13:23:43.965237  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 13:23:43.964842  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1020 13:23:43.965295  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1020 13:23:43.964879  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1020 13:23:43.965329  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1020 13:23:43.965183  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1020 13:23:43.974871  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1020 13:23:43.974964  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1020 13:23:44.021067  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1020 13:23:44.021103  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1020 13:23:44.021161  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1020 13:23:44.021172  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1020 13:23:44.055314  495732 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1020 13:23:44.055442  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1020 13:23:43.489761  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:45.989236  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	W1020 13:23:44.342983  495732 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1020 13:23:44.343228  495732 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:44.482181  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1020 13:23:44.498968  495732 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1020 13:23:44.499009  495732 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:44.499080  495732 ssh_runner.go:195] Run: which crictl
	I1020 13:23:44.541675  495732 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 13:23:44.541782  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1020 13:23:44.579761  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:46.393815  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.85200341s)
	I1020 13:23:46.393848  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1020 13:23:46.393866  495732 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1020 13:23:46.393904  495732 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.814045076s)
	I1020 13:23:46.393938  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1020 13:23:46.394005  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:48.128751  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.734785609s)
	I1020 13:23:48.128774  495732 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.734727228s)
	I1020 13:23:48.128856  495732 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:23:48.128780  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1020 13:23:48.128926  495732 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1020 13:23:48.128949  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	W1020 13:23:48.488168  492109 pod_ready.go:104] pod "coredns-66bc5c9577-9hxmm" is not "Ready", error: <nil>
	I1020 13:23:48.988950  492109 pod_ready.go:94] pod "coredns-66bc5c9577-9hxmm" is "Ready"
	I1020 13:23:48.988977  492109 pod_ready.go:86] duration metric: took 40.50703476s for pod "coredns-66bc5c9577-9hxmm" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:48.992321  492109 pod_ready.go:83] waiting for pod "etcd-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:48.998401  492109 pod_ready.go:94] pod "etcd-embed-certs-979197" is "Ready"
	I1020 13:23:48.998426  492109 pod_ready.go:86] duration metric: took 6.076339ms for pod "etcd-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.003327  492109 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.011180  492109 pod_ready.go:94] pod "kube-apiserver-embed-certs-979197" is "Ready"
	I1020 13:23:49.011257  492109 pod_ready.go:86] duration metric: took 7.9054ms for pod "kube-apiserver-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.013941  492109 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.186361  492109 pod_ready.go:94] pod "kube-controller-manager-embed-certs-979197" is "Ready"
	I1020 13:23:49.186431  492109 pod_ready.go:86] duration metric: took 172.420101ms for pod "kube-controller-manager-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.387135  492109 pod_ready.go:83] waiting for pod "kube-proxy-gf2bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.788119  492109 pod_ready.go:94] pod "kube-proxy-gf2bz" is "Ready"
	I1020 13:23:49.788164  492109 pod_ready.go:86] duration metric: took 401.003968ms for pod "kube-proxy-gf2bz" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:49.986360  492109 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:50.385855  492109 pod_ready.go:94] pod "kube-scheduler-embed-certs-979197" is "Ready"
	I1020 13:23:50.385878  492109 pod_ready.go:86] duration metric: took 399.490276ms for pod "kube-scheduler-embed-certs-979197" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:23:50.385889  492109 pod_ready.go:40] duration metric: took 41.908048897s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:23:50.457047  492109 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:23:50.460779  492109 out.go:179] * Done! kubectl is now configured to use "embed-certs-979197" cluster and "default" namespace by default
	I1020 13:23:49.584031  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.455063233s)
	I1020 13:23:49.584061  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1020 13:23:49.584080  495732 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 13:23:49.584129  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1020 13:23:49.584189  495732 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.455322337s)
	I1020 13:23:49.584218  495732 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1020 13:23:49.584292  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1020 13:23:51.079902  495732 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.495585197s)
	I1020 13:23:51.079934  495732 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1020 13:23:51.079976  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1020 13:23:51.080109  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.495963909s)
	I1020 13:23:51.080119  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1020 13:23:51.080135  495732 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 13:23:51.080177  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1020 13:23:52.515243  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.435045358s)
	I1020 13:23:52.515268  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1020 13:23:52.515290  495732 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1020 13:23:52.515340  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1020 13:23:56.389920  495732 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.874552149s)
	I1020 13:23:56.389961  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1020 13:23:56.389984  495732 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1020 13:23:56.390067  495732 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1020 13:23:56.966944  495732 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1020 13:23:56.966980  495732 cache_images.go:124] Successfully loaded all cached images
	I1020 13:23:56.966987  495732 cache_images.go:93] duration metric: took 13.825673818s to LoadCachedImages
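Every image in the list followed the same three-step cycle visible above: a stat existence check on the node, an scp from the local cache, then sudo podman load. Once LoadCachedImages reports success, the runtime's view can be spot-checked with:

    sudo crictl images | grep -E 'kube-|etcd|coredns|pause|storage-provisioner'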
	I1020 13:23:56.966998  495732 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1020 13:23:56.967084  495732 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-744804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
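That ExecStart override is written below as the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; after the daemon-reload, the merged unit can be inspected with:

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager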
	I1020 13:23:56.967163  495732 ssh_runner.go:195] Run: crio config
	I1020 13:23:57.028759  495732 cni.go:84] Creating CNI manager for ""
	I1020 13:23:57.028782  495732 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:23:57.028803  495732 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:23:57.028826  495732 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744804 NodeName:no-preload-744804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:23:57.028956  495732 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
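minikube stages this file as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below); assuming it is then promoted to kubeadm.yaml, cluster bootstrap amounts to the usual config-driven init:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml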
	
	I1020 13:23:57.029039  495732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:23:57.038306  495732 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1020 13:23:57.038377  495732 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1020 13:23:57.046452  495732 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1020 13:23:57.046550  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1020 13:23:57.047331  495732 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1020 13:23:57.047340  495732 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1020 13:23:57.051936  495732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1020 13:23:57.051973  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1020 13:23:57.871567  495732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:23:57.888627  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1020 13:23:57.900018  495732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1020 13:23:57.900612  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1020 13:23:57.986699  495732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1020 13:23:57.994340  495732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1020 13:23:57.996056  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
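The checksum=file: suffix on each download URL above means the fetched binary is verified against the matching upstream .sha256 file. A manual equivalent for one binary, assuming dl.k8s.io's .sha256 files contain just the bare hex digest:

    curl -fsSLo kubelet 'https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet'
    echo "$(curl -fsSL 'https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256')  kubelet" | sha256sum --check -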
	I1020 13:23:58.552131  495732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:23:58.562054  495732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 13:23:58.576448  495732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:23:58.590815  495732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1020 13:23:58.605931  495732 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:23:58.610262  495732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:23:58.620970  495732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:23:58.752341  495732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:23:58.771590  495732 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804 for IP: 192.168.76.2
	I1020 13:23:58.771611  495732 certs.go:195] generating shared ca certs ...
	I1020 13:23:58.771627  495732 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:58.771765  495732 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:23:58.771812  495732 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:23:58.771821  495732 certs.go:257] generating profile certs ...
	I1020 13:23:58.771874  495732 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.key
	I1020 13:23:58.771890  495732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt with IP's: []
	I1020 13:23:59.091733  495732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt ...
	I1020 13:23:59.091763  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: {Name:mkf619d1e3f023a0bc178359e535b6d7341bb9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.092000  495732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.key ...
	I1020 13:23:59.092018  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.key: {Name:mk01a75192aa0d6293e7c41c457b5e86827600cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.092108  495732 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a
	I1020 13:23:59.092121  495732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt.c014680a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1020 13:23:59.205010  495732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt.c014680a ...
	I1020 13:23:59.205041  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt.c014680a: {Name:mk972c380561861417af1b11c5f7cd9fc891ee82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.205210  495732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a ...
	I1020 13:23:59.205227  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a: {Name:mk217e5a37836290e476f7e9911b017161dc3657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.205317  495732 certs.go:382] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt.c014680a -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt
	I1020 13:23:59.205397  495732 certs.go:386] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key
	I1020 13:23:59.205450  495732 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key
	I1020 13:23:59.205470  495732 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt with IP's: []
	I1020 13:23:59.304245  495732 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt ...
	I1020 13:23:59.304273  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt: {Name:mk0bd19d94d02f64a125c00c2925a5b36a5de40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.304475  495732 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key ...
	I1020 13:23:59.304492  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key: {Name:mkc47d7bfea320fe5558e804001a9bbf0d53256e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:23:59.304682  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:23:59.304732  495732 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:23:59.304746  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:23:59.304772  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:23:59.304798  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:23:59.304823  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:23:59.304870  495732 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:23:59.305420  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:23:59.324779  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:23:59.344406  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:23:59.362900  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:23:59.381972  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 13:23:59.401343  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:23:59.421833  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:23:59.441856  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:23:59.459277  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:23:59.476527  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:23:59.494807  495732 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:23:59.513082  495732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:23:59.527108  495732 ssh_runner.go:195] Run: openssl version
	I1020 13:23:59.535732  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:23:59.545601  495732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:59.549687  495732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:59.549753  495732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:23:59.591732  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1020 13:23:59.600741  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:23:59.609654  495732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:23:59.614559  495732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:23:59.614648  495732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:23:59.657218  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:23:59.666162  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:23:59.674930  495732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:23:59.678828  495732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:23:59.678939  495732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:23:59.720663  495732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:23:59.729358  495732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:23:59.733500  495732 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 13:23:59.733559  495732 kubeadm.go:400] StartCluster: {Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:23:59.733633  495732 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:23:59.733695  495732 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:23:59.762824  495732 cri.go:89] found id: ""
	I1020 13:23:59.762905  495732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:23:59.771122  495732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 13:23:59.779120  495732 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 13:23:59.779212  495732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 13:23:59.787580  495732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 13:23:59.787600  495732 kubeadm.go:157] found existing configuration files:
	
	I1020 13:23:59.787674  495732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 13:23:59.795544  495732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 13:23:59.795623  495732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 13:23:59.803247  495732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 13:23:59.811526  495732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 13:23:59.811654  495732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 13:23:59.819391  495732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 13:23:59.827768  495732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 13:23:59.827852  495732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 13:23:59.835753  495732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 13:23:59.843939  495732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 13:23:59.844060  495732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1020 13:23:59.851960  495732 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 13:23:59.892691  495732 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 13:23:59.892957  495732 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 13:23:59.918645  495732 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 13:23:59.918807  495732 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1020 13:23:59.918867  495732 kubeadm.go:318] OS: Linux
	I1020 13:23:59.918961  495732 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 13:23:59.919067  495732 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1020 13:23:59.919148  495732 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 13:23:59.919213  495732 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 13:23:59.919281  495732 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 13:23:59.919373  495732 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 13:23:59.919518  495732 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 13:23:59.919604  495732 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 13:23:59.919682  495732 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1020 13:23:59.983280  495732 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 13:23:59.983462  495732 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 13:23:59.983591  495732 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 13:23:59.998594  495732 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 13:24:00.011971  495732 out.go:252]   - Generating certificates and keys ...
	I1020 13:24:00.012174  495732 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 13:24:00.012253  495732 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 13:24:00.972081  495732 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 13:24:02.422596  495732 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 13:24:03.263081  495732 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 13:24:03.314111  495732 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 13:24:03.520244  495732 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 13:24:03.520844  495732 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-744804] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1020 13:24:03.825287  495732 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 13:24:03.825627  495732 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-744804] and IPs [192.168.76.2 127.0.0.1 ::1]
	
	
	==> CRI-O <==
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.611169674Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.616922324Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.617086413Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.617156969Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.621589577Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.621746593Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.62189898Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.627347389Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.627761162Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.627868364Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.634804545Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:23:47 embed-certs-979197 crio[649]: time="2025-10-20T13:23:47.634840181Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.607156542Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=23f2e024-0581-4495-b01b-5cb481d2b579 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.608888503Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f7ab58c6-1cad-43bd-9655-03ddaf067ae3 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.61029502Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6/dashboard-metrics-scraper" id=3cf42d3b-f7a2-4de9-b759-cf220049bd88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.610392432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.624485886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.625222109Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.685212142Z" level=info msg="Created container 626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6/dashboard-metrics-scraper" id=3cf42d3b-f7a2-4de9-b759-cf220049bd88 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.690081761Z" level=info msg="Starting container: 626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190" id=0478b198-6a2f-4eb2-9790-efd39c398185 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.69276658Z" level=info msg="Started container" PID=1708 containerID=626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6/dashboard-metrics-scraper id=0478b198-6a2f-4eb2-9790-efd39c398185 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09495344c60fec01a74d03afcec9abff8624a76139cacc492949307fb4b62e29
	Oct 20 13:23:55 embed-certs-979197 conmon[1704]: conmon 626170f6984d91bac964 <ninfo>: container 1708 exited with status 1
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.894515414Z" level=info msg="Removing container: a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123" id=f1eee6e3-34a7-46a7-87fa-92299fdfea0d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.947790922Z" level=info msg="Error loading conmon cgroup of container a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123: cgroup deleted" id=f1eee6e3-34a7-46a7-87fa-92299fdfea0d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 20 13:23:55 embed-certs-979197 crio[649]: time="2025-10-20T13:23:55.978336022Z" level=info msg="Removed container a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6/dashboard-metrics-scraper" id=f1eee6e3-34a7-46a7-87fa-92299fdfea0d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	626170f6984d9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   3                   09495344c60fe       dashboard-metrics-scraper-6ffb444bf9-dttk6   kubernetes-dashboard
	a04ddd0eaf35f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           30 seconds ago       Running             storage-provisioner         2                   9e1bcbc4f6e94       storage-provisioner                          kube-system
	9f363f84681d1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   4520fcd2ce6f2       kubernetes-dashboard-855c9754f9-9zg9f        kubernetes-dashboard
	a05f9f7328a59       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   f3ccc82a5720e       coredns-66bc5c9577-9hxmm                     kube-system
	3ac82206b7112       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   3b1f3badfa230       busybox                                      default
	e9a76dd7d82fe       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   9e1bcbc4f6e94       storage-provisioner                          kube-system
	ec4f1b2741d1b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   82a6558c3eeb6       kindnet-jzxdn                                kube-system
	b06d2e7205597       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   089adb56dd90b       kube-proxy-gf2bz                             kube-system
	e584e506b7520       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   7e4be4bae1e57       etcd-embed-certs-979197                      kube-system
	fdf35c27cf71e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2ec466376c96a       kube-scheduler-embed-certs-979197            kube-system
	aa8e7b9b68af4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   89d3865220ab7       kube-apiserver-embed-certs-979197            kube-system
	631a35129ac4d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   523b14ba3b1d7       kube-controller-manager-embed-certs-979197   kube-system
	
	
	==> coredns [a05f9f7328a594e26d4af40388b3fa293e0634a69f2ae56945b323b31b65e515] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54303 - 57002 "HINFO IN 4889675404920174503.7655532191343327730. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014345845s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-979197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-979197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=embed-certs-979197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_22_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:22:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-979197
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:23:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:23:57 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:23:57 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:23:57 +0000   Mon, 20 Oct 2025 13:21:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:23:57 +0000   Mon, 20 Oct 2025 13:22:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-979197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                746efe57-6e86-4a6f-8038-c5a3b70dbd80
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-9hxmm                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-embed-certs-979197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-jzxdn                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-embed-certs-979197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-embed-certs-979197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-gf2bz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-embed-certs-979197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dttk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-9zg9f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 116s                   kube-proxy       
	  Normal   Starting                 60s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m12s)  kubelet          Node embed-certs-979197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m12s)  kubelet          Node embed-certs-979197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m12s)  kubelet          Node embed-certs-979197 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m2s                   kubelet          Node embed-certs-979197 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m2s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m2s                   kubelet          Node embed-certs-979197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m2s                   kubelet          Node embed-certs-979197 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           118s                   node-controller  Node embed-certs-979197 event: Registered Node embed-certs-979197 in Controller
	  Normal   NodeReady                106s                   kubelet          Node embed-certs-979197 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node embed-certs-979197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node embed-certs-979197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node embed-certs-979197 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node embed-certs-979197 event: Registered Node embed-certs-979197 in Controller
	
	
	==> dmesg <==
	[Oct20 13:00] overlayfs: idmapped layers are currently not supported
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	[ +43.225983] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e584e506b7520e3f3fc6c5efbd25f505db7a034d9a0b978b8af3a90afb94f84b] <==
	{"level":"warn","ts":"2025-10-20T13:23:05.063758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.084603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.105321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.163653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.199331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.234464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.270776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.299116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.331952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.366327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.389856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.433424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.451590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.482490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.524491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.544245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.561377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.590770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.610780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.641144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.685862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.693150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.710323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.726099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:23:05.809777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56758","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:24:09 up  3:06,  0 user,  load average: 2.21, 2.53, 2.44
	Linux embed-certs-979197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ec4f1b2741d1bcf1038b63753cb9726f96d12224de84119b14ed7d03a8e887da] <==
	I1020 13:23:07.405071       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:23:07.405440       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 13:23:07.405624       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:23:07.405674       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:23:07.405711       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:23:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:23:07.610858       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:23:07.610876       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:23:07.610885       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:23:07.611202       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:23:37.610792       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 13:23:37.611127       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1020 13:23:37.611217       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 13:23:37.611738       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1020 13:23:39.211147       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:23:39.211182       1 metrics.go:72] Registering metrics
	I1020 13:23:39.211258       1 controller.go:711] "Syncing nftables rules"
	I1020 13:23:47.610915       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 13:23:47.610957       1 main.go:301] handling current node
	I1020 13:23:57.610825       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 13:23:57.610915       1 main.go:301] handling current node
	I1020 13:24:07.612621       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1020 13:24:07.612662       1 main.go:301] handling current node
	
	
	==> kube-apiserver [aa8e7b9b68af423d774d170b1c024dba6f7323fa1d41441cd1e8ee87d1cd0140] <==
	I1020 13:23:06.719756       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:23:06.719762       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:23:06.720960       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 13:23:06.723551       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1020 13:23:06.723589       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 13:23:06.723796       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1020 13:23:06.734147       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 13:23:06.738212       1 policy_source.go:240] refreshing policies
	I1020 13:23:06.739170       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 13:23:06.780268       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 13:23:06.780326       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:23:06.784557       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:23:06.817672       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1020 13:23:06.875238       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:23:07.394289       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:23:08.041314       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:23:08.110737       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:23:08.158586       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:23:08.177693       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:23:08.335032       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.234.67"}
	I1020 13:23:08.381464       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.97.105"}
	E1020 13:23:08.384233       1 repairip.go:372] "Unhandled Error" err="the ClusterIP [IPv4]: 10.96.97.105 for Service kubernetes-dashboard/dashboard-metrics-scraper is not allocated; repairing" logger="UnhandledError"
	I1020 13:23:10.288454       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:23:10.494113       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:23:10.587030       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [631a35129ac4de8ec7ce893c70fd5f816fb79609c9e434d0fb0f0fad3f58552b] <==
	I1020 13:23:10.044678       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:23:10.044733       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:23:10.049412       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:23:10.053740       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:23:10.054935       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:23:10.059221       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 13:23:10.061565       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1020 13:23:10.064420       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 13:23:10.070741       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 13:23:10.071028       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:23:10.080333       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 13:23:10.080636       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 13:23:10.081196       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:23:10.081279       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:23:10.081292       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:23:10.081303       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 13:23:10.081312       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 13:23:10.081324       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1020 13:23:10.081332       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:23:10.091461       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:23:10.092536       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:23:10.098774       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:23:10.098901       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:23:10.098991       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-979197"
	I1020 13:23:10.099039       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [b06d2e7205597130313084e1717d17e5b507cae70710ab71067333cf26a81bff] <==
	I1020 13:23:07.887123       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:23:08.036019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:23:08.145964       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:23:08.156543       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 13:23:08.156737       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:23:08.412936       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:23:08.414130       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:23:08.419506       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:23:08.419935       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:23:08.420182       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:23:08.421723       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:23:08.432834       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:23:08.433375       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1020 13:23:08.423583       1 config.go:309] "Starting node config controller"
	I1020 13:23:08.433611       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:23:08.433619       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:23:08.422280       1 config.go:200] "Starting service config controller"
	I1020 13:23:08.433653       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:23:08.422614       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:23:08.433675       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:23:08.433682       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:23:08.534732       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fdf35c27cf71e3c6a3b8814a9f32bced0ae742f30f72aff6760a85b4a3a7145b] <==
	I1020 13:23:03.816102       1 serving.go:386] Generated self-signed cert in-memory
	W1020 13:23:06.773823       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1020 13:23:06.773857       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1020 13:23:06.773868       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1020 13:23:06.773876       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1020 13:23:06.842974       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:23:06.843003       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:23:06.845504       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:23:06.845570       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:23:06.845656       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:23:06.845676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:23:06.948050       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:23:11 embed-certs-979197 kubelet[772]: W1020 13:23:11.049719     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/crio-09495344c60fec01a74d03afcec9abff8624a76139cacc492949307fb4b62e29 WatchSource:0}: Error finding container 09495344c60fec01a74d03afcec9abff8624a76139cacc492949307fb4b62e29: Status 404 returned error can't find the container with id 09495344c60fec01a74d03afcec9abff8624a76139cacc492949307fb4b62e29
	Oct 20 13:23:11 embed-certs-979197 kubelet[772]: W1020 13:23:11.067012     772 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/737cd86e9d78626ee28c71a4e169731ee98ccc56c82849ecd7daee2f153b6d5b/crio-4520fcd2ce6f2dba312c0c0e9e970399ee724a548489685001b97ae7bf8cc091 WatchSource:0}: Error finding container 4520fcd2ce6f2dba312c0c0e9e970399ee724a548489685001b97ae7bf8cc091: Status 404 returned error can't find the container with id 4520fcd2ce6f2dba312c0c0e9e970399ee724a548489685001b97ae7bf8cc091
	Oct 20 13:23:16 embed-certs-979197 kubelet[772]: I1020 13:23:16.775616     772 scope.go:117] "RemoveContainer" containerID="3e883a34e108625af730330052ef9c85c72257ed737a8ac60b8bd77056bb88c0"
	Oct 20 13:23:17 embed-certs-979197 kubelet[772]: I1020 13:23:17.777014     772 scope.go:117] "RemoveContainer" containerID="3e883a34e108625af730330052ef9c85c72257ed737a8ac60b8bd77056bb88c0"
	Oct 20 13:23:17 embed-certs-979197 kubelet[772]: I1020 13:23:17.777295     772 scope.go:117] "RemoveContainer" containerID="c8e8b0f723b03c344d61cec79065031eaeaca6d71e07441e51fa65989ad98471"
	Oct 20 13:23:17 embed-certs-979197 kubelet[772]: E1020 13:23:17.777438     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:23:21 embed-certs-979197 kubelet[772]: I1020 13:23:21.021791     772 scope.go:117] "RemoveContainer" containerID="c8e8b0f723b03c344d61cec79065031eaeaca6d71e07441e51fa65989ad98471"
	Oct 20 13:23:21 embed-certs-979197 kubelet[772]: E1020 13:23:21.022004     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:23:32 embed-certs-979197 kubelet[772]: I1020 13:23:32.608711     772 scope.go:117] "RemoveContainer" containerID="c8e8b0f723b03c344d61cec79065031eaeaca6d71e07441e51fa65989ad98471"
	Oct 20 13:23:32 embed-certs-979197 kubelet[772]: I1020 13:23:32.819118     772 scope.go:117] "RemoveContainer" containerID="c8e8b0f723b03c344d61cec79065031eaeaca6d71e07441e51fa65989ad98471"
	Oct 20 13:23:33 embed-certs-979197 kubelet[772]: I1020 13:23:33.823300     772 scope.go:117] "RemoveContainer" containerID="a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123"
	Oct 20 13:23:33 embed-certs-979197 kubelet[772]: E1020 13:23:33.823481     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:23:33 embed-certs-979197 kubelet[772]: I1020 13:23:33.846491     772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-9zg9f" podStartSLOduration=13.605175029 podStartE2EDuration="23.845790601s" podCreationTimestamp="2025-10-20 13:23:10 +0000 UTC" firstStartedPulling="2025-10-20 13:23:11.070098754 +0000 UTC m=+10.657307863" lastFinishedPulling="2025-10-20 13:23:21.310714326 +0000 UTC m=+20.897923435" observedRunningTime="2025-10-20 13:23:21.808673104 +0000 UTC m=+21.395882246" watchObservedRunningTime="2025-10-20 13:23:33.845790601 +0000 UTC m=+33.432999701"
	Oct 20 13:23:37 embed-certs-979197 kubelet[772]: I1020 13:23:37.834493     772 scope.go:117] "RemoveContainer" containerID="e9a76dd7d82fef4db24bc979f481c627369eff8a5527c155e7498370b3f8a2c7"
	Oct 20 13:23:41 embed-certs-979197 kubelet[772]: I1020 13:23:41.021840     772 scope.go:117] "RemoveContainer" containerID="a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123"
	Oct 20 13:23:41 embed-certs-979197 kubelet[772]: E1020 13:23:41.022005     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:23:55 embed-certs-979197 kubelet[772]: I1020 13:23:55.606213     772 scope.go:117] "RemoveContainer" containerID="a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123"
	Oct 20 13:23:55 embed-certs-979197 kubelet[772]: I1020 13:23:55.884787     772 scope.go:117] "RemoveContainer" containerID="a738b1c11e7f8173e8cd7592c1e87607249abec3c26812157ce6886bd8544123"
	Oct 20 13:23:55 embed-certs-979197 kubelet[772]: I1020 13:23:55.885118     772 scope.go:117] "RemoveContainer" containerID="626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190"
	Oct 20 13:23:55 embed-certs-979197 kubelet[772]: E1020 13:23:55.885282     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:24:01 embed-certs-979197 kubelet[772]: I1020 13:24:01.021865     772 scope.go:117] "RemoveContainer" containerID="626170f6984d91bac964bd18c726874f80d16de3a9fc62b06ce808368e62b190"
	Oct 20 13:24:01 embed-certs-979197 kubelet[772]: E1020 13:24:01.022067     772 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dttk6_kubernetes-dashboard(bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dttk6" podUID="bea2ad3c-1e1f-4fbe-91ef-3213e1ecbeb2"
	Oct 20 13:24:02 embed-certs-979197 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:24:03 embed-certs-979197 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:24:03 embed-certs-979197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9f363f84681d1a5440f5360011859037100e536b500edc9635f8c9c0b5efa08f] <==
	2025/10/20 13:23:21 Using namespace: kubernetes-dashboard
	2025/10/20 13:23:21 Using in-cluster config to connect to apiserver
	2025/10/20 13:23:21 Using secret token for csrf signing
	2025/10/20 13:23:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 13:23:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 13:23:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 13:23:21 Generating JWE encryption key
	2025/10/20 13:23:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 13:23:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 13:23:22 Initializing JWE encryption key from synchronized object
	2025/10/20 13:23:22 Creating in-cluster Sidecar client
	2025/10/20 13:23:22 Serving insecurely on HTTP port: 9090
	2025/10/20 13:23:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:23:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:23:21 Starting overwatch
	
	
	==> storage-provisioner [a04ddd0eaf35fc812a2c522888d81d485b2f3a10f3d187d544c6d233a6aec6e0] <==
	W1020 13:23:45.648257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:49.247286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:52.301325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:55.327723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:55.335751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:23:55.335888       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:23:55.336415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"390cf0fd-e9c8-4ac9-a37f-95614000b7ae", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-979197_cab80016-8017-48ca-b77d-ae0221310200 became leader
	I1020 13:23:55.341029       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-979197_cab80016-8017-48ca-b77d-ae0221310200!
	W1020 13:23:55.349943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:55.358624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:23:55.441353       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-979197_cab80016-8017-48ca-b77d-ae0221310200!
	W1020 13:23:57.362560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:57.372343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:59.377921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:23:59.386307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:01.389320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:01.396573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:03.400128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:03.412591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:05.419375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:05.427101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:07.432568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:07.442436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:09.451890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:24:09.461415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e9a76dd7d82fef4db24bc979f481c627369eff8a5527c155e7498370b3f8a2c7] <==
	I1020 13:23:07.396920       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 13:23:37.399915       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
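The kubelet section above shows dashboard-metrics-scraper-6ffb444bf9-dttk6 stuck in CrashLoopBackOff, with the back-off growing from 10s to 40s. A minimal triage sketch, assuming the embed-certs-979197 cluster were still running and using the pod name from the logs:

	kubectl --context embed-certs-979197 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-dttk6
	kubectl --context embed-certs-979197 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-dttk6 --previous

The --previous flag fetches logs from the last terminated container instance, which is usually more informative than the current one while it sits in back-off.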
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-979197 -n embed-certs-979197
E1020 13:24:10.055049  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-979197 -n embed-certs-979197: exit status 2 (455.10322ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-979197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.83s)
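The status probes in this post-mortem rely on minikube's Go-template output (--format={{.APIServer}} above). A hedged one-liner that reports several component states at once, assuming the same profile name:

	out/minikube-linux-arm64 status -p embed-certs-979197 --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'

The per-component breakdown is what distinguishes a cleanly paused cluster from the mixed state reported here.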

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.39s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1020 13:24:56.300506  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.662471ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:24:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
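The MK_ADDON_ENABLE_PAUSED exit comes from minikube's paused-state check, which, per the stderr above, shells out to runc on the node. A sketch of reproducing that check by hand, assuming the newest-cni-018730 node container is still up:

	out/minikube-linux-arm64 -p newest-cni-018730 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p newest-cni-018730 ssh -- ls -ld /run/runc

If the second command also reports "no such file or directory", it matches the stderr: runc has no state directory on the node, so listing paused containers fails before the addon is ever touched.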
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
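The warning is expected for this profile: it was started with --network-plugin=cni and no CNI supplied, so only control-plane pods can schedule. The "Last Start" log below notes the --cni flag as the user-friendly alternative and that minikube recommends kindnet for the docker driver with the crio runtime; a hedged equivalent invocation would be:

	out/minikube-linux-arm64 start -p newest-cni-018730 --memory=3072 --driver=docker --container-runtime=crio --cni=kindnet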
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-018730
helpers_test.go:243: (dbg) docker inspect newest-cni-018730:

-- stdout --
	[
	    {
	        "Id": "b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7",
	        "Created": "2025-10-20T13:24:20.016324566Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500249,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:24:20.088725046Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/hosts",
	        "LogPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7-json.log",
	        "Name": "/newest-cni-018730",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-018730:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-018730",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7",
	                "LowerDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-018730",
	                "Source": "/var/lib/docker/volumes/newest-cni-018730/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-018730",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-018730",
	                "name.minikube.sigs.k8s.io": "newest-cni-018730",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c06f33e57db4ee3bd2805315cb66905fb73897b052a7918220b3e9c1480070b",
	            "SandboxKey": "/var/run/docker/netns/0c06f33e57db",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-018730": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:84:7e:07:4c:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f69e85737164e79f4c4847958c72fc64125c4b0702605f11df6e4b774d799d40",
	                    "EndpointID": "d6db9e8bf010664ddf8c0235a0653ef445e19995a7d498c159cec992710d99af",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-018730",
	                        "b3c52ddf59c0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
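When only one field of this inspect dump is needed, such as the host port published for the API server's 8443/tcp (33456 above), a Go-template query is lighter than the full JSON; for example:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-018730
	docker port newest-cni-018730 8443/tcp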
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-018730 -n newest-cni-018730
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-018730 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-018730 logs -n 25: (1.12457313s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ delete  │ -p old-k8s-version-995203                                                                                                                                                                                                                     │ old-k8s-version-995203       │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:20 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:20 UTC │ 20 Oct 25 13:21 UTC │
	│ delete  │ -p cert-expiration-066011                                                                                                                                                                                                                     │ cert-expiration-066011       │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:21 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-794175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ stop    │ -p embed-certs-979197 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ image   │ default-k8s-diff-port-794175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ pause   │ -p default-k8s-diff-port-794175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p disable-driver-mounts-972433                                                                                                                                                                                                               │ disable-driver-mounts-972433 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ image   │ embed-certs-979197 image list --format=json                                                                                                                                                                                                   │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ pause   │ -p embed-certs-979197 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:24:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:24:13.929393  499861 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:24:13.929539  499861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:24:13.929550  499861 out.go:374] Setting ErrFile to fd 2...
	I1020 13:24:13.929556  499861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:24:13.929924  499861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:24:13.930417  499861 out.go:368] Setting JSON to false
	I1020 13:24:13.931352  499861 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11204,"bootTime":1760955450,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:24:13.931450  499861 start.go:141] virtualization:  
	I1020 13:24:13.935653  499861 out.go:179] * [newest-cni-018730] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:24:13.939986  499861 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:24:13.940148  499861 notify.go:220] Checking for updates...
	I1020 13:24:13.946521  499861 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:24:13.949708  499861 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:24:13.952871  499861 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:24:13.955886  499861 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:24:13.959181  499861 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:24:13.962562  499861 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:24:13.962658  499861 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:24:14.010257  499861 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:24:14.010402  499861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:24:14.120119  499861 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:24:14.106751655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:24:14.120221  499861 docker.go:318] overlay module found
	I1020 13:24:14.126410  499861 out.go:179] * Using the docker driver based on user configuration
	I1020 13:24:14.129369  499861 start.go:305] selected driver: docker
	I1020 13:24:14.129385  499861 start.go:925] validating driver "docker" against <nil>
	I1020 13:24:14.129399  499861 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:24:14.130215  499861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:24:14.232474  499861 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:24:14.219193221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:24:14.232643  499861 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1020 13:24:14.232668  499861 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1020 13:24:14.232960  499861 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 13:24:14.238507  499861 out.go:179] * Using Docker driver with root privileges
	I1020 13:24:14.241351  499861 cni.go:84] Creating CNI manager for ""
	I1020 13:24:14.241423  499861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:24:14.241434  499861 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:24:14.241512  499861 start.go:349] cluster config:
	{Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:24:14.244692  499861 out.go:179] * Starting "newest-cni-018730" primary control-plane node in "newest-cni-018730" cluster
	I1020 13:24:14.247437  499861 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:24:14.250366  499861 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:24:09.440271  495732 out.go:252]   - Booting up control plane ...
	I1020 13:24:09.440411  495732 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 13:24:09.440496  495732 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 13:24:09.440566  495732 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 13:24:09.461812  495732 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 13:24:09.461924  495732 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 13:24:09.473027  495732 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 13:24:09.473688  495732 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 13:24:09.473760  495732 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 13:24:09.685254  495732 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 13:24:09.685389  495732 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 13:24:10.688817  495732 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001891941s
	I1020 13:24:10.690312  495732 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 13:24:10.690754  495732 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1020 13:24:10.691082  495732 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 13:24:10.691382  495732 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1020 13:24:14.253187  499861 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:24:14.253259  499861 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:24:14.253268  499861 cache.go:58] Caching tarball of preloaded images
	I1020 13:24:14.253350  499861 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:24:14.253360  499861 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:24:14.253466  499861 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/config.json ...
	I1020 13:24:14.253486  499861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/config.json: {Name:mk99ea1eb8a0111517b601af69574a29c10d9f10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:14.253647  499861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:24:14.277029  499861 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:24:14.277057  499861 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:24:14.277072  499861 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:24:14.277095  499861 start.go:360] acquireMachinesLock for newest-cni-018730: {Name:mke4ea61e223de4e71dff13c842eb038a598c816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:24:14.277213  499861 start.go:364] duration metric: took 96.518µs to acquireMachinesLock for "newest-cni-018730"
	I1020 13:24:14.277244  499861 start.go:93] Provisioning new machine with config: &{Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:24:14.277323  499861 start.go:125] createHost starting for "" (driver="docker")
	I1020 13:24:14.280675  499861 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 13:24:14.280913  499861 start.go:159] libmachine.API.Create for "newest-cni-018730" (driver="docker")
	I1020 13:24:14.280957  499861 client.go:168] LocalClient.Create starting
	I1020 13:24:14.281032  499861 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem
	I1020 13:24:14.281073  499861 main.go:141] libmachine: Decoding PEM data...
	I1020 13:24:14.281086  499861 main.go:141] libmachine: Parsing certificate...
	I1020 13:24:14.281140  499861 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem
	I1020 13:24:14.281156  499861 main.go:141] libmachine: Decoding PEM data...
	I1020 13:24:14.281165  499861 main.go:141] libmachine: Parsing certificate...
	I1020 13:24:14.281522  499861 cli_runner.go:164] Run: docker network inspect newest-cni-018730 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 13:24:14.312129  499861 cli_runner.go:211] docker network inspect newest-cni-018730 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 13:24:14.312209  499861 network_create.go:284] running [docker network inspect newest-cni-018730] to gather additional debugging logs...
	I1020 13:24:14.312228  499861 cli_runner.go:164] Run: docker network inspect newest-cni-018730
	W1020 13:24:14.360583  499861 cli_runner.go:211] docker network inspect newest-cni-018730 returned with exit code 1
	I1020 13:24:14.360614  499861 network_create.go:287] error running [docker network inspect newest-cni-018730]: docker network inspect newest-cni-018730: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-018730 not found
	I1020 13:24:14.360628  499861 network_create.go:289] output of [docker network inspect newest-cni-018730]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-018730 not found
	
	** /stderr **
	I1020 13:24:14.360740  499861 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:24:14.396996  499861 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-31214b196961 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:99:57:10:1b:40} reservation:<nil>}
	I1020 13:24:14.397256  499861 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf6e9e751b4a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:0d:2b:68:24:bc} reservation:<nil>}
	I1020 13:24:14.397619  499861 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-076921d0625d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:c5:51:b1:3d:0c} reservation:<nil>}
	I1020 13:24:14.397936  499861 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-307dee052f6f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:bd:c0:83:5e:74} reservation:<nil>}
	I1020 13:24:14.398329  499861 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ebf00}
	I1020 13:24:14.398346  499861 network_create.go:124] attempt to create docker network newest-cni-018730 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1020 13:24:14.398400  499861 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-018730 newest-cni-018730
	I1020 13:24:14.495578  499861 network_create.go:108] docker network newest-cni-018730 192.168.85.0/24 created
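The subnet probe above walks candidate /24 ranges (192.168.49.0, .58.0, .67.0, .76.0) until it finds one that is free, then creates a labeled bridge network pinned to that range. A minimal shell sketch of the same create-and-verify step, assuming the profile name and the free 192.168.85.0/24 from this run:

	# Recreate the network exactly as the cli_runner invocation above does:
	docker network create --driver=bridge \
	  --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=newest-cni-018730 \
	  newest-cni-018730
	# Confirm the subnet and gateway that were applied:
	docker network inspect newest-cni-018730 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'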
	I1020 13:24:14.495615  499861 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-018730" container
	I1020 13:24:14.495690  499861 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 13:24:14.527249  499861 cli_runner.go:164] Run: docker volume create newest-cni-018730 --label name.minikube.sigs.k8s.io=newest-cni-018730 --label created_by.minikube.sigs.k8s.io=true
	I1020 13:24:14.557537  499861 oci.go:103] Successfully created a docker volume newest-cni-018730
	I1020 13:24:14.557623  499861 cli_runner.go:164] Run: docker run --rm --name newest-cni-018730-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-018730 --entrypoint /usr/bin/test -v newest-cni-018730:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 13:24:15.245801  499861 oci.go:107] Successfully prepared a docker volume newest-cni-018730
	I1020 13:24:15.245850  499861 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:24:15.245871  499861 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 13:24:15.245939  499861 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-018730:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
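The extraction step above runs a throwaway container whose entrypoint is tar, bind-mounting the preload tarball read-only and the newly created named volume as the target, so the images land in /var before the real node container ever starts. A sketch of the same command, using the cache path from this run (substitute your own minikube home):

	docker run --rm --entrypoint /usr/bin/tar \
	  -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro \
	  -v newest-cni-018730:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 \
	  -I lz4 -xf /preloaded.tar -C /extractDir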
	I1020 13:24:16.877222  495732 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.185239862s
	I1020 13:24:17.953292  495732 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.261482176s
	I1020 13:24:20.693283  495732 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.00200863s
	I1020 13:24:20.724824  495732 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 13:24:20.751212  495732 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 13:24:20.782530  495732 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 13:24:20.782737  495732 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-744804 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 13:24:20.825133  495732 kubeadm.go:318] [bootstrap-token] Using token: ie0vzi.x24sznfrg3xzo0rk
	I1020 13:24:20.828082  495732 out.go:252]   - Configuring RBAC rules ...
	I1020 13:24:20.828228  495732 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 13:24:20.838793  495732 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 13:24:20.881291  495732 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 13:24:20.908664  495732 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 13:24:20.924566  495732 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 13:24:20.935622  495732 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 13:24:21.152321  495732 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 13:24:21.690018  495732 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 13:24:22.113319  495732 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 13:24:22.114889  495732 kubeadm.go:318] 
	I1020 13:24:22.114971  495732 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 13:24:22.114979  495732 kubeadm.go:318] 
	I1020 13:24:22.115059  495732 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 13:24:22.115064  495732 kubeadm.go:318] 
	I1020 13:24:22.115091  495732 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 13:24:22.116336  495732 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 13:24:22.116412  495732 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 13:24:22.116418  495732 kubeadm.go:318] 
	I1020 13:24:22.116476  495732 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 13:24:22.116480  495732 kubeadm.go:318] 
	I1020 13:24:22.116529  495732 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 13:24:22.116534  495732 kubeadm.go:318] 
	I1020 13:24:22.116588  495732 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 13:24:22.116667  495732 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 13:24:22.116738  495732 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 13:24:22.116743  495732 kubeadm.go:318] 
	I1020 13:24:22.116860  495732 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 13:24:22.117096  495732 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 13:24:22.117105  495732 kubeadm.go:318] 
	I1020 13:24:22.117194  495732 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ie0vzi.x24sznfrg3xzo0rk \
	I1020 13:24:22.117302  495732 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 \
	I1020 13:24:22.117324  495732 kubeadm.go:318] 	--control-plane 
	I1020 13:24:22.117328  495732 kubeadm.go:318] 
	I1020 13:24:22.117417  495732 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 13:24:22.117421  495732 kubeadm.go:318] 
	I1020 13:24:22.117508  495732 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ie0vzi.x24sznfrg3xzo0rk \
	I1020 13:24:22.117616  495732 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 
	I1020 13:24:22.135761  495732 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1020 13:24:22.135998  495732 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1020 13:24:22.136107  495732 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
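The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed on the control plane. A hedged sketch using the pipeline documented for kubeadm, assuming the CA sits at the standard kubeadm path /etc/kubernetes/pki/ca.crt:

	# Should print the b1db577f... digest from the join command above:
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'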
	I1020 13:24:22.136122  495732 cni.go:84] Creating CNI manager for ""
	I1020 13:24:22.136130  495732 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:24:22.140746  495732 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 13:24:19.919028  499861 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-018730:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.67305444s)
	I1020 13:24:19.919063  499861 kic.go:203] duration metric: took 4.673189088s to extract preloaded images to volume ...
	W1020 13:24:19.919200  499861 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1020 13:24:19.919311  499861 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 13:24:19.997899  499861 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-018730 --name newest-cni-018730 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-018730 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-018730 --network newest-cni-018730 --ip 192.168.85.2 --volume newest-cni-018730:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 13:24:20.384786  499861 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Running}}
	I1020 13:24:20.409790  499861 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
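The two inspect calls above are the liveness checks run against the freshly started KIC container; the same Go-template formats are runnable by hand:

	# Running flag and coarse status ("true running" once the container is up):
	docker container inspect newest-cni-018730 \
	  --format '{{.State.Running}} {{.State.Status}}'
	# Host port Docker mapped to the container's SSH port (22/tcp), which is
	# what all of the ssh_runner calls that follow connect to:
	docker container inspect newest-cni-018730 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'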
	I1020 13:24:20.435261  499861 cli_runner.go:164] Run: docker exec newest-cni-018730 stat /var/lib/dpkg/alternatives/iptables
	I1020 13:24:20.504948  499861 oci.go:144] the created container "newest-cni-018730" has a running status.
	I1020 13:24:20.504975  499861 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa...
	I1020 13:24:21.965595  499861 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 13:24:21.992786  499861 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:24:22.019167  499861 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 13:24:22.019189  499861 kic_runner.go:114] Args: [docker exec --privileged newest-cni-018730 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1020 13:24:22.085114  499861 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:24:22.121326  499861 machine.go:93] provisionDockerMachine start ...
	I1020 13:24:22.121420  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:22.158263  499861 main.go:141] libmachine: Using SSH client type: native
	I1020 13:24:22.158597  499861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1020 13:24:22.158607  499861 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:24:22.162572  499861 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1020 13:24:22.146486  495732 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 13:24:22.151913  495732 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 13:24:22.151938  495732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 13:24:22.217052  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 13:24:22.699632  495732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 13:24:22.699766  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:22.699837  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-744804 minikube.k8s.io/updated_at=2025_10_20T13_24_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=no-preload-744804 minikube.k8s.io/primary=true
	I1020 13:24:22.907319  495732 ops.go:34] apiserver oom_adj: -16
	I1020 13:24:22.907434  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:23.408195  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:23.908446  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:24.408532  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:24.908069  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:25.407558  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:25.907595  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:26.408069  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:26.907958  495732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:27.082634  495732 kubeadm.go:1113] duration metric: took 4.382916199s to wait for elevateKubeSystemPrivileges
	I1020 13:24:27.082665  495732 kubeadm.go:402] duration metric: took 27.349107923s to StartCluster
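The repeated `kubectl get sa default` runs above are a poll loop: elevateKubeSystemPrivileges retries roughly every 500ms (per the timestamps) until the "default" ServiceAccount exists, the signal that the cluster-admin binding can take effect. A sketch of an equivalent loop, using the same binary and kubeconfig paths as the log:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms spacing of the attempts above
	done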
	I1020 13:24:27.082683  495732 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:27.082743  495732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:24:27.083434  495732 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:27.083654  495732 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:24:27.083812  495732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 13:24:27.084061  495732 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:24:27.084094  495732 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:24:27.084151  495732 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744804"
	I1020 13:24:27.084166  495732 addons.go:238] Setting addon storage-provisioner=true in "no-preload-744804"
	I1020 13:24:27.084187  495732 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:24:27.085036  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:24:27.085211  495732 addons.go:69] Setting default-storageclass=true in profile "no-preload-744804"
	I1020 13:24:27.085229  495732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744804"
	I1020 13:24:27.085488  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:24:27.092443  495732 out.go:179] * Verifying Kubernetes components...
	I1020 13:24:27.095382  495732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:24:27.136198  495732 addons.go:238] Setting addon default-storageclass=true in "no-preload-744804"
	I1020 13:24:27.136241  495732 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:24:27.136859  495732 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:24:27.182351  495732 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:24:27.182373  495732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:24:27.183010  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:24:27.200507  495732 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:24:27.203657  495732 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:24:27.203683  495732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:24:27.203754  495732 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:24:27.263973  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:24:27.272493  495732 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:24:27.482102  495732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
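The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile ahead of its forward stanza, so pods can resolve host.minikube.internal to the gateway IP. Assuming the stock Corefile layout, the injected fragment looks like the comment below; the kubectl call verifies the replaced ConfigMap:

	#        hosts {
	#           192.168.76.1 host.minikube.internal
	#           fallthrough
	#        }
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'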
	I1020 13:24:27.564537  495732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:24:27.625994  495732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:24:27.637990  495732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:24:28.353320  495732 node_ready.go:35] waiting up to 6m0s for node "no-preload-744804" to be "Ready" ...
	I1020 13:24:28.354776  495732 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1020 13:24:28.789843  495732 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.151812361s)
	I1020 13:24:28.793550  495732 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1020 13:24:25.323834  499861 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-018730
	
	I1020 13:24:25.323865  499861 ubuntu.go:182] provisioning hostname "newest-cni-018730"
	I1020 13:24:25.323942  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:25.344258  499861 main.go:141] libmachine: Using SSH client type: native
	I1020 13:24:25.344728  499861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1020 13:24:25.344746  499861 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-018730 && echo "newest-cni-018730" | sudo tee /etc/hostname
	I1020 13:24:25.515244  499861 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-018730
	
	I1020 13:24:25.515328  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:25.535759  499861 main.go:141] libmachine: Using SSH client type: native
	I1020 13:24:25.536075  499861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1020 13:24:25.536098  499861 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-018730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-018730/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-018730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:24:25.689179  499861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:24:25.689210  499861 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:24:25.689267  499861 ubuntu.go:190] setting up certificates
	I1020 13:24:25.689282  499861 provision.go:84] configureAuth start
	I1020 13:24:25.689360  499861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:24:25.707507  499861 provision.go:143] copyHostCerts
	I1020 13:24:25.707577  499861 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:24:25.707630  499861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:24:25.707728  499861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:24:25.707852  499861 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:24:25.707866  499861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:24:25.707968  499861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:24:25.708054  499861 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:24:25.708067  499861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:24:25.708101  499861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:24:25.708164  499861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.newest-cni-018730 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-018730]
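The server cert generated above should carry exactly the SAN list from the san=[...] field. A quick hedged check that the written certificate matches, using the machine path from this run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'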
	I1020 13:24:27.380426  499861 provision.go:177] copyRemoteCerts
	I1020 13:24:27.380551  499861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:24:27.380599  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:27.398985  499861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:24:27.524745  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:24:27.553720  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:24:27.586502  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:24:27.622866  499861 provision.go:87] duration metric: took 1.933560734s to configureAuth
	I1020 13:24:27.622940  499861 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:24:27.623173  499861 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:24:27.623327  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:27.658222  499861 main.go:141] libmachine: Using SSH client type: native
	I1020 13:24:27.658525  499861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1020 13:24:27.658540  499861 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:24:28.038992  499861 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:24:28.039065  499861 machine.go:96] duration metric: took 5.917720005s to provisionDockerMachine
	I1020 13:24:28.039091  499861 client.go:171] duration metric: took 13.75812699s to LocalClient.Create
	I1020 13:24:28.039148  499861 start.go:167] duration metric: took 13.758214376s to libmachine.API.Create "newest-cni-018730"
	I1020 13:24:28.039173  499861 start.go:293] postStartSetup for "newest-cni-018730" (driver="docker")
	I1020 13:24:28.039198  499861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:24:28.039312  499861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:24:28.039391  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:28.073392  499861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:24:28.194262  499861 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:24:28.203459  499861 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:24:28.203492  499861 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:24:28.203504  499861 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:24:28.203572  499861 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:24:28.203663  499861 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:24:28.203779  499861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:24:28.218530  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:24:28.257579  499861 start.go:296] duration metric: took 218.377885ms for postStartSetup
	I1020 13:24:28.257966  499861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:24:28.283601  499861 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/config.json ...
	I1020 13:24:28.283914  499861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:24:28.283967  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:28.312903  499861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:24:28.422403  499861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:24:28.429042  499861 start.go:128] duration metric: took 14.151703565s to createHost
	I1020 13:24:28.429068  499861 start.go:83] releasing machines lock for "newest-cni-018730", held for 14.151840797s
	I1020 13:24:28.429147  499861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:24:28.471348  499861 ssh_runner.go:195] Run: cat /version.json
	I1020 13:24:28.471415  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:28.471629  499861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:24:28.471684  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:28.491775  499861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:24:28.514018  499861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:24:28.612517  499861 ssh_runner.go:195] Run: systemctl --version
	I1020 13:24:28.743581  499861 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:24:28.803210  499861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:24:28.809315  499861 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:24:28.809396  499861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:24:28.840660  499861 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1020 13:24:28.840693  499861 start.go:495] detecting cgroup driver to use...
	I1020 13:24:28.840726  499861 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:24:28.840781  499861 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:24:28.864321  499861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:24:28.878229  499861 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:24:28.878303  499861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:24:28.895235  499861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:24:28.913922  499861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:24:28.796429  495732 addons.go:514] duration metric: took 1.712312069s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1020 13:24:28.862809  495732 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-744804" context rescaled to 1 replicas
	I1020 13:24:29.067168  499861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:24:29.200791  499861 docker.go:234] disabling docker service ...
	I1020 13:24:29.200873  499861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:24:29.224423  499861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:24:29.238893  499861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:24:29.359514  499861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:24:29.481598  499861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:24:29.494372  499861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:24:29.508991  499861 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:24:29.509065  499861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:24:29.518522  499861 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:24:29.518607  499861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:24:29.528103  499861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:24:29.538052  499861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:24:29.546991  499861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:24:29.556496  499861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:24:29.565763  499861 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:24:29.580204  499861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:24:29.589431  499861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:24:29.597688  499861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:24:29.605147  499861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:24:29.730031  499861 ssh_runner.go:195] Run: sudo systemctl restart crio
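The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A spot-check of the values they should leave behind (pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", plus the unprivileged-port sysctl):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf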
	I1020 13:24:29.864679  499861 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:24:29.864760  499861 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:24:29.869309  499861 start.go:563] Will wait 60s for crictl version
	I1020 13:24:29.869377  499861 ssh_runner.go:195] Run: which crictl
	I1020 13:24:29.873357  499861 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:24:29.902842  499861 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:24:29.902934  499861 ssh_runner.go:195] Run: crio --version
	I1020 13:24:29.933734  499861 ssh_runner.go:195] Run: crio --version
	I1020 13:24:29.966378  499861 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 13:24:29.969310  499861 cli_runner.go:164] Run: docker network inspect newest-cni-018730 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:24:29.986120  499861 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 13:24:29.989742  499861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:24:30.002450  499861 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1020 13:24:30.010846  499861 kubeadm.go:883] updating cluster {Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:24:30.011066  499861 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:24:30.011169  499861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:24:30.098008  499861 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:24:30.098037  499861 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:24:30.098102  499861 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:24:30.126717  499861 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:24:30.126745  499861 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:24:30.126754  499861 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 13:24:30.126875  499861 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-018730 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
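The unit text above is what later gets written to /lib/systemd/system/kubelet.service together with a 10-kubeadm.conf drop-in (see the scp steps below). Two hedged ways to confirm on the node that the rendered flags are the ones actually in effect:

	systemctl cat kubelet        # show the unit plus drop-ins as installed
	ps -o args= -C kubelet       # compare against the ExecStart line above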
	I1020 13:24:30.126972  499861 ssh_runner.go:195] Run: crio config
	I1020 13:24:30.184421  499861 cni.go:84] Creating CNI manager for ""
	I1020 13:24:30.184447  499861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:24:30.184466  499861 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1020 13:24:30.184491  499861 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-018730 NodeName:newest-cni-018730 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:24:30.184624  499861 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-018730"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 13:24:30.184708  499861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:24:30.193556  499861 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:24:30.193631  499861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:24:30.202011  499861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 13:24:30.215750  499861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:24:30.229289  499861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
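The rendered kubeadm config shown earlier is staged as /var/tmp/minikube/kubeadm.yaml.new by the scp above. As a hedged aside, recent kubeadm releases can sanity-check such a file up front (command availability depends on the kubeadm version shipped in the binaries directory):

	/var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new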
	I1020 13:24:30.245184  499861 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:24:30.249295  499861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:24:30.259866  499861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:24:30.390924  499861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:24:30.406844  499861 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730 for IP: 192.168.85.2
	I1020 13:24:30.406865  499861 certs.go:195] generating shared ca certs ...
	I1020 13:24:30.406880  499861 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:30.407037  499861 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:24:30.407084  499861 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:24:30.407096  499861 certs.go:257] generating profile certs ...
	I1020 13:24:30.407164  499861 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/client.key
	I1020 13:24:30.407181  499861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/client.crt with IP's: []
	I1020 13:24:30.863704  499861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/client.crt ...
	I1020 13:24:30.863736  499861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/client.crt: {Name:mk3780952d31c0a4f5e29ea17501111f1cc9eec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:30.863991  499861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/client.key ...
	I1020 13:24:30.864008  499861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/client.key: {Name:mk15d0af8e8fc33e14c4a7afb121764c8d108097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:30.864123  499861 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key.b19b56d0
	I1020 13:24:30.864140  499861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.crt.b19b56d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1020 13:24:31.375571  499861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.crt.b19b56d0 ...
	I1020 13:24:31.375601  499861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.crt.b19b56d0: {Name:mk17b9be8b05453cad22b6bc4fa175d8f37b662a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:31.375842  499861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key.b19b56d0 ...
	I1020 13:24:31.375861  499861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key.b19b56d0: {Name:mkd317cffc5f69824b838936f5a60838a9e2ab31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:31.375961  499861 certs.go:382] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.crt.b19b56d0 -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.crt
	I1020 13:24:31.376040  499861 certs.go:386] copying /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key.b19b56d0 -> /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key
	I1020 13:24:31.376105  499861 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.key
	I1020 13:24:31.376123  499861 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.crt with IP's: []
	I1020 13:24:31.459620  499861 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.crt ...
	I1020 13:24:31.459649  499861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.crt: {Name:mk445b97ee1163c6d2ec3e18badf6e15fbe16d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:31.459832  499861 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.key ...
	I1020 13:24:31.459845  499861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.key: {Name:mk380778d566aaa64ac36d37be23b090a80739a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:31.460031  499861 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:24:31.460074  499861 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:24:31.460088  499861 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:24:31.460112  499861 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:24:31.460139  499861 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:24:31.460163  499861 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:24:31.460213  499861 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:24:31.460829  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:24:31.480241  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:24:31.499357  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:24:31.519549  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:24:31.540053  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 13:24:31.559335  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:24:31.579696  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:24:31.601056  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 13:24:31.621202  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:24:31.639867  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:24:31.657554  499861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:24:31.682925  499861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:24:31.700661  499861 ssh_runner.go:195] Run: openssl version
	I1020 13:24:31.707369  499861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:24:31.716739  499861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:24:31.721056  499861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:24:31.721126  499861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:24:31.762492  499861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:24:31.772411  499861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:24:31.781809  499861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:24:31.786071  499861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:24:31.786193  499861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:24:31.828171  499861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:24:31.837019  499861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:24:31.845902  499861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:24:31.850122  499861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:24:31.850195  499861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:24:31.892684  499861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
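	Note: each test/hash/ln sequence above installs a PEM into OpenSSL's hashed certificate directory: openssl x509 -hash -noout prints the subject-name hash (e.g. b5213941 for minikubeCA.pem), and /etc/ssl/certs/<hash>.0 is symlinked at the file. A small Go sketch reproducing those two steps locally (host paths assumed; the real code runs the equivalent commands over SSH via ssh_runner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA computes the OpenSSL subject hash of a PEM and symlinks
    // /etc/ssl/certs/<hash>.0 at it, mirroring the ln -fs in the log.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // emulate the -f in ln -fs
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }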
	I1020 13:24:31.902237  499861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:24:31.905695  499861 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1020 13:24:31.905749  499861 kubeadm.go:400] StartCluster: {Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:24:31.905834  499861 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:24:31.905897  499861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:24:31.933700  499861 cri.go:89] found id: ""
	I1020 13:24:31.933841  499861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:24:31.941671  499861 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1020 13:24:31.949613  499861 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1020 13:24:31.949734  499861 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1020 13:24:31.958590  499861 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1020 13:24:31.958650  499861 kubeadm.go:157] found existing configuration files:
	
	I1020 13:24:31.958725  499861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1020 13:24:31.968548  499861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1020 13:24:31.968627  499861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1020 13:24:31.975999  499861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1020 13:24:31.983798  499861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1020 13:24:31.983892  499861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1020 13:24:31.991332  499861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1020 13:24:31.999344  499861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1020 13:24:31.999482  499861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1020 13:24:32.014367  499861 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1020 13:24:32.029592  499861 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1020 13:24:32.029684  499861 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
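	Note: the grep/rm pairs above (kubeadm.go:163) implement one rule: a kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; anything else is deleted so kubeadm can regenerate it. A simplified local re-implementation of that check (direct file access instead of ssh_runner):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8443")
    	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := filepath.Join("/etc/kubernetes", name)
    		data, err := os.ReadFile(path)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			// Missing or pointing at another endpoint: remove so kubeadm rewrites it.
    			os.Remove(path)
    			fmt.Println("cleared", path)
    		}
    	}
    }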
	I1020 13:24:32.037641  499861 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1020 13:24:32.113144  499861 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1020 13:24:32.113564  499861 kubeadm.go:318] [preflight] Running pre-flight checks
	I1020 13:24:32.163712  499861 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1020 13:24:32.163798  499861 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1020 13:24:32.163847  499861 kubeadm.go:318] OS: Linux
	I1020 13:24:32.163922  499861 kubeadm.go:318] CGROUPS_CPU: enabled
	I1020 13:24:32.163977  499861 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1020 13:24:32.164036  499861 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1020 13:24:32.164101  499861 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1020 13:24:32.164167  499861 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1020 13:24:32.164235  499861 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1020 13:24:32.164291  499861 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1020 13:24:32.164352  499861 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1020 13:24:32.164439  499861 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1020 13:24:32.238966  499861 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1020 13:24:32.239169  499861 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1020 13:24:32.239314  499861 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1020 13:24:32.247655  499861 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1020 13:24:32.252619  499861 out.go:252]   - Generating certificates and keys ...
	I1020 13:24:32.252777  499861 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1020 13:24:32.252889  499861 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1020 13:24:32.455253  499861 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1020 13:24:32.838049  499861 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1020 13:24:33.045049  499861 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1020 13:24:33.452200  499861 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1020 13:24:33.563657  499861 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1020 13:24:33.564082  499861 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-018730] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1020 13:24:30.357041  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:24:32.357337  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:24:33.936239  499861 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1020 13:24:33.936429  499861 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-018730] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1020 13:24:34.087262  499861 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1020 13:24:35.435917  499861 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1020 13:24:36.080152  499861 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1020 13:24:36.080385  499861 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1020 13:24:36.647103  499861 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1020 13:24:37.005292  499861 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1020 13:24:37.718692  499861 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	W1020 13:24:34.857879  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:24:37.356817  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:24:39.393592  499861 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1020 13:24:39.622088  499861 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1020 13:24:39.623106  499861 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1020 13:24:39.626150  499861 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1020 13:24:39.629545  499861 out.go:252]   - Booting up control plane ...
	I1020 13:24:39.629649  499861 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1020 13:24:39.629731  499861 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1020 13:24:39.630931  499861 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1020 13:24:39.653771  499861 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1020 13:24:39.654106  499861 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1020 13:24:39.661827  499861 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1020 13:24:39.662194  499861 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1020 13:24:39.662259  499861 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1020 13:24:39.798299  499861 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1020 13:24:39.798435  499861 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1020 13:24:40.835656  499861 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.037842542s
	I1020 13:24:40.839672  499861 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1020 13:24:40.839771  499861 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1020 13:24:40.840096  499861 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1020 13:24:40.840186  499861 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1020 13:24:39.357259  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:24:41.357668  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:24:43.857103  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:24:44.459390  499861 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.619085783s
	I1020 13:24:46.417481  499861 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.57773956s
	I1020 13:24:48.341869  499861 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.502081593s
	I1020 13:24:48.365657  499861 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 13:24:48.385589  499861 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 13:24:48.409714  499861 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 13:24:48.409926  499861 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-018730 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 13:24:48.423511  499861 kubeadm.go:318] [bootstrap-token] Using token: 5qy7of.lki6xahp0426zkgq
	I1020 13:24:48.426460  499861 out.go:252]   - Configuring RBAC rules ...
	I1020 13:24:48.426594  499861 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 13:24:48.437469  499861 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 13:24:48.449263  499861 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 13:24:48.455483  499861 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 13:24:48.461185  499861 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 13:24:48.470611  499861 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 13:24:48.749318  499861 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	W1020 13:24:46.355888  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:24:48.356598  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:24:49.220904  499861 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 13:24:49.750501  499861 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 13:24:49.751716  499861 kubeadm.go:318] 
	I1020 13:24:49.751792  499861 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 13:24:49.751803  499861 kubeadm.go:318] 
	I1020 13:24:49.751889  499861 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 13:24:49.751897  499861 kubeadm.go:318] 
	I1020 13:24:49.751924  499861 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 13:24:49.751989  499861 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 13:24:49.752046  499861 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 13:24:49.752054  499861 kubeadm.go:318] 
	I1020 13:24:49.752111  499861 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 13:24:49.752119  499861 kubeadm.go:318] 
	I1020 13:24:49.752169  499861 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 13:24:49.752177  499861 kubeadm.go:318] 
	I1020 13:24:49.752230  499861 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 13:24:49.752312  499861 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 13:24:49.752413  499861 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 13:24:49.752422  499861 kubeadm.go:318] 
	I1020 13:24:49.752510  499861 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 13:24:49.752594  499861 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 13:24:49.752604  499861 kubeadm.go:318] 
	I1020 13:24:49.752691  499861 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5qy7of.lki6xahp0426zkgq \
	I1020 13:24:49.752803  499861 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 \
	I1020 13:24:49.752828  499861 kubeadm.go:318] 	--control-plane 
	I1020 13:24:49.752839  499861 kubeadm.go:318] 
	I1020 13:24:49.752931  499861 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 13:24:49.752942  499861 kubeadm.go:318] 
	I1020 13:24:49.753033  499861 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5qy7of.lki6xahp0426zkgq \
	I1020 13:24:49.753143  499861 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 
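	Note: the --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo, not of the whole certificate. A minimal Go check, assuming the CA is readable at /var/lib/minikube/certs/ca.crt:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the CA public key, so hash RawSubjectPublicKeyInfo.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum) // should match the hash printed by kubeadm init
    }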
	I1020 13:24:49.756706  499861 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1020 13:24:49.756943  499861 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1020 13:24:49.757051  499861 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 13:24:49.757071  499861 cni.go:84] Creating CNI manager for ""
	I1020 13:24:49.757080  499861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:24:49.762115  499861 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 13:24:49.765038  499861 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 13:24:49.769220  499861 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 13:24:49.769245  499861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 13:24:49.787050  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 13:24:50.128171  499861 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 13:24:50.128340  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:50.128471  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-018730 minikube.k8s.io/updated_at=2025_10_20T13_24_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=newest-cni-018730 minikube.k8s.io/primary=true
	I1020 13:24:50.282454  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:50.282518  499861 ops.go:34] apiserver oom_adj: -16
	I1020 13:24:50.782559  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:51.283124  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:51.782612  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:52.282545  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:52.782685  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:53.283296  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:53.782496  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1020 13:24:50.856675  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:24:52.857036  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:24:54.282815  499861 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:24:54.440735  499861 kubeadm.go:1113] duration metric: took 4.312465583s to wait for elevateKubeSystemPrivileges
	I1020 13:24:54.440769  499861 kubeadm.go:402] duration metric: took 22.535022321s to StartCluster
	I1020 13:24:54.440786  499861 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:54.440852  499861 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:24:54.441842  499861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:24:54.442060  499861 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:24:54.442158  499861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 13:24:54.442414  499861 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:24:54.442456  499861 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:24:54.442518  499861 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-018730"
	I1020 13:24:54.442533  499861 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-018730"
	I1020 13:24:54.442557  499861 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:24:54.443047  499861 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:24:54.443651  499861 addons.go:69] Setting default-storageclass=true in profile "newest-cni-018730"
	I1020 13:24:54.443671  499861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-018730"
	I1020 13:24:54.444010  499861 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:24:54.446377  499861 out.go:179] * Verifying Kubernetes components...
	I1020 13:24:54.450160  499861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:24:54.485032  499861 addons.go:238] Setting addon default-storageclass=true in "newest-cni-018730"
	I1020 13:24:54.485074  499861 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:24:54.485486  499861 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:24:54.504831  499861 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:24:54.509431  499861 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:24:54.509454  499861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:24:54.509518  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:54.528679  499861 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:24:54.528700  499861 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:24:54.528775  499861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:24:54.543731  499861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:24:54.570760  499861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:24:54.933017  499861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:24:54.937258  499861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:24:54.949506  499861 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:24:54.949707  499861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 13:24:55.672875  499861 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
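	Note: the sed pipeline above injects two things into CoreDNS's Corefile before replacing the ConfigMap: a log directive ahead of errors, and a hosts block ahead of the forward directive. The resulting excerpt (surrounding stock directives elided) would look like:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
        ...
    }

	This is what makes host.minikube.internal resolve to the Docker gateway from inside the cluster, as confirmed by the "host record injected" line above.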
	I1020 13:24:55.674417  499861 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:24:55.674511  499861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:24:55.699652  499861 api_server.go:72] duration metric: took 1.257554475s to wait for apiserver process to appear ...
	I1020 13:24:55.699717  499861 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:24:55.699750  499861 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:24:55.716752  499861 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1020 13:24:55.719141  499861 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 13:24:55.719668  499861 addons.go:514] duration metric: took 1.277196237s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1020 13:24:55.720214  499861 api_server.go:141] control plane version: v1.34.1
	I1020 13:24:55.720272  499861 api_server.go:131] duration metric: took 20.534721ms to wait for apiserver health ...
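	Note: the healthz wait above (api_server.go) is an HTTPS GET against the apiserver's /healthz endpoint, retried until it returns 200 with body "ok". A sketch of that poll loop; TLS verification is skipped here only to keep the example short, whereas minikube's client trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// For this sketch only; the real client verifies against minikubeCA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }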
	I1020 13:24:55.720298  499861 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:24:55.725789  499861 system_pods.go:59] 9 kube-system pods found
	I1020 13:24:55.725892  499861 system_pods.go:61] "coredns-66bc5c9577-sjxcr" [8a89d2c7-108a-4c4b-9f12-c918862fa04a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 13:24:55.725919  499861 system_pods.go:61] "coredns-66bc5c9577-xsws7" [b0cdf263-dba8-4c44-830f-1093b9424761] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 13:24:55.725958  499861 system_pods.go:61] "etcd-newest-cni-018730" [19f3b3ce-69b4-4765-b079-797644b0d529] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:24:55.725986  499861 system_pods.go:61] "kindnet-znl5b" [5a6ca5b2-16cc-4b03-a59f-b1867665c8c8] Running
	I1020 13:24:55.726013  499861 system_pods.go:61] "kube-apiserver-newest-cni-018730" [ba8650fa-d31b-4e19-b6ed-806262274ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:24:55.726047  499861 system_pods.go:61] "kube-controller-manager-newest-cni-018730" [f4e1cbc5-da99-4dcc-8fa1-465324888375] Running
	I1020 13:24:55.726067  499861 system_pods.go:61] "kube-proxy-cfrgk" [2b049f68-632a-4288-9c43-da5c3d72e46f] Running
	I1020 13:24:55.726090  499861 system_pods.go:61] "kube-scheduler-newest-cni-018730" [0b6f56c6-4faa-479c-b013-20b4ea5e1c5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:24:55.726124  499861 system_pods.go:61] "storage-provisioner" [81054ccb-6cd7-47f4-8244-a5a5df3d6ca1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 13:24:55.726143  499861 system_pods.go:74] duration metric: took 5.826564ms to wait for pod list to return data ...
	I1020 13:24:55.726164  499861 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:24:55.732400  499861 default_sa.go:45] found service account: "default"
	I1020 13:24:55.732462  499861 default_sa.go:55] duration metric: took 6.258422ms for default service account to be created ...
	I1020 13:24:55.732498  499861 kubeadm.go:586] duration metric: took 1.290406152s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 13:24:55.732529  499861 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:24:55.740217  499861 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:24:55.740295  499861 node_conditions.go:123] node cpu capacity is 2
	I1020 13:24:55.740323  499861 node_conditions.go:105] duration metric: took 7.75719ms to run NodePressure ...
	I1020 13:24:55.740351  499861 start.go:241] waiting for startup goroutines ...
	I1020 13:24:56.177302  499861 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-018730" context rescaled to 1 replicas
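	Note: the rescale above (kapi.go:214) trims the kubeadm default of two CoreDNS replicas down to one for this single-node cluster, equivalent to running kubectl -n kube-system scale deployment coredns --replicas=1 against the new context.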
	I1020 13:24:56.177340  499861 start.go:246] waiting for cluster config update ...
	I1020 13:24:56.177353  499861 start.go:255] writing updated cluster config ...
	I1020 13:24:56.177708  499861 ssh_runner.go:195] Run: rm -f paused
	I1020 13:24:56.239842  499861 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:24:56.245425  499861 out.go:179] * Done! kubectl is now configured to use "newest-cni-018730" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.473454666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.494588927Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=69b4de23-b5b5-47d9-a7bc-9c8e82b1b122 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.495602044Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-cfrgk/POD" id=7cd7c07f-1bb5-43b6-945d-43e854f72c22 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.495658398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.507144121Z" level=info msg="Ran pod sandbox 6d28363b7dd8ab7a21ec574dbaef57e8da5cea7d685a4a8a68290ddd551abaee with infra container: kube-system/kindnet-znl5b/POD" id=69b4de23-b5b5-47d9-a7bc-9c8e82b1b122 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.514709432Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=eb552aa3-3408-4054-93d5-1b4ec129aa17 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.525508664Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7cd7c07f-1bb5-43b6-945d-43e854f72c22 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.5271227Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=01dc44db-3fc7-4458-bd86-1666938c45f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.54513724Z" level=info msg="Creating container: kube-system/kindnet-znl5b/kindnet-cni" id=e4a7adf1-1ab0-461a-aa60-885e0a7fb5aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.5452786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.579386687Z" level=info msg="Ran pod sandbox e37c63e576f0c4663c9f4de6c997e4266c39d11425adb934949bca3b4c323be8 with infra container: kube-system/kube-proxy-cfrgk/POD" id=7cd7c07f-1bb5-43b6-945d-43e854f72c22 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.597819374Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1473dad2-2f7c-4b0e-b801-2e99096cc6b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.607125114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.607959053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.608267455Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6836e903-ff8f-4afe-a5e6-ad7facdb2fc4 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.643994673Z" level=info msg="Creating container: kube-system/kube-proxy-cfrgk/kube-proxy" id=a9599be5-347d-4628-ac11-59f121a3d4a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.648624765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.705901544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.706512547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.761968355Z" level=info msg="Created container c4f80b3d2a6585c1427bb53a1a86d0bd90769fdf4ac4cd8a116d488d95099935: kube-system/kindnet-znl5b/kindnet-cni" id=e4a7adf1-1ab0-461a-aa60-885e0a7fb5aa name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.762991129Z" level=info msg="Starting container: c4f80b3d2a6585c1427bb53a1a86d0bd90769fdf4ac4cd8a116d488d95099935" id=d19e88dc-5dcc-4942-9cc5-51c2e5c75def name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.766117784Z" level=info msg="Started container" PID=1451 containerID=c4f80b3d2a6585c1427bb53a1a86d0bd90769fdf4ac4cd8a116d488d95099935 description=kube-system/kindnet-znl5b/kindnet-cni id=d19e88dc-5dcc-4942-9cc5-51c2e5c75def name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d28363b7dd8ab7a21ec574dbaef57e8da5cea7d685a4a8a68290ddd551abaee
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.812054508Z" level=info msg="Created container 689d0285f3830553377c392c66cad3e0276eabc8948042bfd780313f78d9c718: kube-system/kube-proxy-cfrgk/kube-proxy" id=a9599be5-347d-4628-ac11-59f121a3d4a6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.815507165Z" level=info msg="Starting container: 689d0285f3830553377c392c66cad3e0276eabc8948042bfd780313f78d9c718" id=96284540-9bf3-41d4-b8d4-8259f2edb9d5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:24:54 newest-cni-018730 crio[841]: time="2025-10-20T13:24:54.827620359Z" level=info msg="Started container" PID=1455 containerID=689d0285f3830553377c392c66cad3e0276eabc8948042bfd780313f78d9c718 description=kube-system/kube-proxy-cfrgk/kube-proxy id=96284540-9bf3-41d4-b8d4-8259f2edb9d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e37c63e576f0c4663c9f4de6c997e4266c39d11425adb934949bca3b4c323be8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	689d0285f3830       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   e37c63e576f0c       kube-proxy-cfrgk                            kube-system
	c4f80b3d2a658       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   6d28363b7dd8a       kindnet-znl5b                               kube-system
	33bec51031e57       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   0                   06f0f6dd02818       kube-controller-manager-newest-cni-018730   kube-system
	7a20188ed2e01       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            0                   e6972c5c4edec       kube-scheduler-newest-cni-018730            kube-system
	539096906b962       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      0                   937a16e8d1704       etcd-newest-cni-018730                      kube-system
	f357667a3fba9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            0                   766b931a29c87       kube-apiserver-newest-cni-018730            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-018730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-018730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=newest-cni-018730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_24_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:24:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-018730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:24:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:24:49 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:24:49 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:24:49 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 20 Oct 2025 13:24:49 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-018730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                02857502-1595-48f5-a221-2258d77f161c
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-018730                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-znl5b                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-018730             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-018730    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-cfrgk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-018730             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 2s    kube-proxy       
	  Normal   Starting                 8s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s    kubelet          Node newest-cni-018730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-018730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s    kubelet          Node newest-cni-018730 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-018730 event: Registered Node newest-cni-018730 in Controller
	
	
	==> dmesg <==
	[Oct20 13:02] overlayfs: idmapped layers are currently not supported
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	[ +43.225983] overlayfs: idmapped layers are currently not supported
	[Oct20 13:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [539096906b962d171832086b9e0a7567c25ef198a1aae2bffb91612812d3595e] <==
	{"level":"warn","ts":"2025-10-20T13:24:44.977945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.001706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.029948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.047853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.067558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.088250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.108355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.124723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.143586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.163454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.181715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.197825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.216931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.232164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.255035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.276528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.317217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.351605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.369595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.390996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.413599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.440320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.455673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.474178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:45.554656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42104","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:24:57 up  3:07,  0 user,  load average: 2.53, 2.61, 2.48
	Linux newest-cni-018730 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4f80b3d2a6585c1427bb53a1a86d0bd90769fdf4ac4cd8a116d488d95099935] <==
	I1020 13:24:54.871512       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:24:54.921870       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 13:24:54.921991       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:24:54.922011       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:24:54.922026       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:24:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:24:55.210995       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:24:55.211012       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:24:55.211024       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:24:55.211275       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [f357667a3fba9574ab11df33cf8985b53c8b7374b5abe9b8fc1a5f1391e1f61b] <==
	I1020 13:24:46.391700       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1020 13:24:46.393249       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1020 13:24:46.400713       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:24:46.414741       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:24:46.424205       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 13:24:46.439009       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:24:46.441250       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:24:46.596355       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:24:47.121503       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 13:24:47.126900       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 13:24:47.126999       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:24:47.877520       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:24:47.944254       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:24:48.069854       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 13:24:48.079580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1020 13:24:48.080919       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:24:48.087209       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:24:48.279447       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:24:49.194212       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:24:49.219705       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 13:24:49.242855       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 13:24:53.286093       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:24:53.301608       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:24:54.130169       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1020 13:24:54.280770       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [33bec51031e5748ca80e7ecb7052964e4684d32696a4db89727519a09b0a31c7] <==
	I1020 13:24:53.288285       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 13:24:53.289039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 13:24:53.289105       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 13:24:53.289139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 13:24:53.292153       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:24:53.292185       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1020 13:24:53.292215       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1020 13:24:53.292240       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1020 13:24:53.292245       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1020 13:24:53.292253       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1020 13:24:53.307506       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:24:53.307702       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-018730" podCIDRs=["10.42.0.0/24"]
	I1020 13:24:53.323684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:24:53.325187       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1020 13:24:53.326312       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 13:24:53.326849       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 13:24:53.326939       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 13:24:53.328176       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:24:53.328207       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:24:53.328232       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:24:53.328408       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 13:24:53.329550       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 13:24:53.330800       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 13:24:53.332989       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 13:24:53.335277       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [689d0285f3830553377c392c66cad3e0276eabc8948042bfd780313f78d9c718] <==
	I1020 13:24:54.943837       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:24:55.080602       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:24:55.180728       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:24:55.180760       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 13:24:55.180829       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:24:55.236887       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:24:55.236944       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:24:55.244992       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:24:55.245272       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:24:55.245288       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:24:55.246323       1 config.go:200] "Starting service config controller"
	I1020 13:24:55.246333       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:24:55.256594       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:24:55.260495       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:24:55.260533       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:24:55.260538       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:24:55.259555       1 config.go:309] "Starting node config controller"
	I1020 13:24:55.260982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:24:55.260990       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:24:55.346421       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:24:55.360668       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:24:55.360710       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7a20188ed2e01df5105488b5e4f9d7ddf9856510c904c288349be8ac1b2555dd] <==
	E1020 13:24:46.418287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 13:24:46.418323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 13:24:46.418360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 13:24:46.418634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 13:24:46.418680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 13:24:46.418716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 13:24:46.418750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 13:24:46.418786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 13:24:46.420489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 13:24:46.425035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1020 13:24:47.270216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 13:24:47.300823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 13:24:47.316718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1020 13:24:47.336842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 13:24:47.435704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 13:24:47.451283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 13:24:47.478558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 13:24:47.502342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 13:24:47.504330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 13:24:47.543756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 13:24:47.567051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 13:24:47.580571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 13:24:47.605073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 13:24:47.623672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1020 13:24:48.002422       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: I1020 13:24:50.271767    1310 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-018730"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: I1020 13:24:50.271852    1310 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-018730"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: I1020 13:24:50.272454    1310 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: I1020 13:24:50.273792    1310 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-018730"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: E1020 13:24:50.296863    1310 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-018730\" already exists" pod="kube-system/etcd-newest-cni-018730"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: E1020 13:24:50.300555    1310 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-018730\" already exists" pod="kube-system/kube-scheduler-newest-cni-018730"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: E1020 13:24:50.311636    1310 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-018730\" already exists" pod="kube-system/kube-apiserver-newest-cni-018730"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: E1020 13:24:50.312348    1310 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-018730\" already exists" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: I1020 13:24:50.333581    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-018730" podStartSLOduration=1.333552943 podStartE2EDuration="1.333552943s" podCreationTimestamp="2025-10-20 13:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:24:50.318814229 +0000 UTC m=+1.307693118" watchObservedRunningTime="2025-10-20 13:24:50.333552943 +0000 UTC m=+1.322431832"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: I1020 13:24:50.350773    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-018730" podStartSLOduration=1.3507514729999999 podStartE2EDuration="1.350751473s" podCreationTimestamp="2025-10-20 13:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:24:50.334956556 +0000 UTC m=+1.323835437" watchObservedRunningTime="2025-10-20 13:24:50.350751473 +0000 UTC m=+1.339630345"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: I1020 13:24:50.370044    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-018730" podStartSLOduration=1.3700247700000001 podStartE2EDuration="1.37002477s" podCreationTimestamp="2025-10-20 13:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:24:50.351866917 +0000 UTC m=+1.340745806" watchObservedRunningTime="2025-10-20 13:24:50.37002477 +0000 UTC m=+1.358903651"
	Oct 20 13:24:50 newest-cni-018730 kubelet[1310]: I1020 13:24:50.394984    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-018730" podStartSLOduration=1.394964013 podStartE2EDuration="1.394964013s" podCreationTimestamp="2025-10-20 13:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:24:50.371190611 +0000 UTC m=+1.360069484" watchObservedRunningTime="2025-10-20 13:24:50.394964013 +0000 UTC m=+1.383842894"
	Oct 20 13:24:53 newest-cni-018730 kubelet[1310]: I1020 13:24:53.336460    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 20 13:24:53 newest-cni-018730 kubelet[1310]: I1020 13:24:53.337670    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.182096    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-xtables-lock\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.182323    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw87g\" (UniqueName: \"kubernetes.io/projected/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-kube-api-access-zw87g\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.182466    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b049f68-632a-4288-9c43-da5c3d72e46f-xtables-lock\") pod \"kube-proxy-cfrgk\" (UID: \"2b049f68-632a-4288-9c43-da5c3d72e46f\") " pod="kube-system/kube-proxy-cfrgk"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.182592    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2b049f68-632a-4288-9c43-da5c3d72e46f-kube-proxy\") pod \"kube-proxy-cfrgk\" (UID: \"2b049f68-632a-4288-9c43-da5c3d72e46f\") " pod="kube-system/kube-proxy-cfrgk"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.182632    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b049f68-632a-4288-9c43-da5c3d72e46f-lib-modules\") pod \"kube-proxy-cfrgk\" (UID: \"2b049f68-632a-4288-9c43-da5c3d72e46f\") " pod="kube-system/kube-proxy-cfrgk"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.182659    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-cni-cfg\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.182678    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-lib-modules\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.182786    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8jt\" (UniqueName: \"kubernetes.io/projected/2b049f68-632a-4288-9c43-da5c3d72e46f-kube-api-access-qw8jt\") pod \"kube-proxy-cfrgk\" (UID: \"2b049f68-632a-4288-9c43-da5c3d72e46f\") " pod="kube-system/kube-proxy-cfrgk"
	Oct 20 13:24:54 newest-cni-018730 kubelet[1310]: I1020 13:24:54.313953    1310 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 20 13:24:55 newest-cni-018730 kubelet[1310]: I1020 13:24:55.353476    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cfrgk" podStartSLOduration=1.353459252 podStartE2EDuration="1.353459252s" podCreationTimestamp="2025-10-20 13:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:24:55.305783937 +0000 UTC m=+6.294662810" watchObservedRunningTime="2025-10-20 13:24:55.353459252 +0000 UTC m=+6.342338125"
	Oct 20 13:24:57 newest-cni-018730 kubelet[1310]: I1020 13:24:57.408276    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-znl5b" podStartSLOduration=3.408256892 podStartE2EDuration="3.408256892s" podCreationTimestamp="2025-10-20 13:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:24:55.377207793 +0000 UTC m=+6.366086674" watchObservedRunningTime="2025-10-20 13:24:57.408256892 +0000 UTC m=+8.397135773"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-018730 -n newest-cni-018730
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-018730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-sjxcr storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner: exit status 1 (79.642072ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-sjxcr" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.39s)
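For anyone replaying the post-mortem above by hand: the non-running-pod sweep in helpers_test.go reduces to a single kubectl query with a field selector. The Go sketch below mirrors that query; it assumes kubectl is on PATH and that the newest-cni-018730 context still exists, and nonRunningPods is a hypothetical helper name, not the harness's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nonRunningPods mirrors the kubectl invocation in the log: pod names across
// all namespaces whose status.phase is not Running.
func nonRunningPods(context string) ([]string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "po", "-o=jsonpath={.items[*].metadata.name}",
		"-A", "--field-selector=status.phase!=Running").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := nonRunningPods("newest-cni-018730")
	if err != nil {
		fmt.Println("sweep failed:", err)
		return
	}
	// In the run above this returned coredns-66bc5c9577-sjxcr and
	// storage-provisioner; both were already gone by the time the follow-up
	// describe ran, consistent with the NotFound errors.
	fmt.Println("non-running pods:", pods)
}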

TestStartStop/group/newest-cni/serial/Pause (6.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-018730 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-018730 --alsologtostderr -v=1: exit status 80 (1.694355204s)

-- stdout --
	* Pausing node newest-cni-018730 ... 
	
	

-- /stdout --
** stderr ** 
	I1020 13:25:18.121669  505169 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:25:18.121879  505169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:18.121902  505169 out.go:374] Setting ErrFile to fd 2...
	I1020 13:25:18.121920  505169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:18.122206  505169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:25:18.122474  505169 out.go:368] Setting JSON to false
	I1020 13:25:18.122515  505169 mustload.go:65] Loading cluster: newest-cni-018730
	I1020 13:25:18.122924  505169 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:18.123505  505169 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:18.149983  505169 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:25:18.150303  505169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:18.207696  505169 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-20 13:25:18.195718469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:18.208858  505169 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-018730 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 13:25:18.212574  505169 out.go:179] * Pausing node newest-cni-018730 ... 
	I1020 13:25:18.216049  505169 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:25:18.216443  505169 ssh_runner.go:195] Run: systemctl --version
	I1020 13:25:18.216492  505169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:18.236988  505169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:18.343484  505169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:25:18.360918  505169 pause.go:52] kubelet running: true
	I1020 13:25:18.361022  505169 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:25:18.592086  505169 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:25:18.592201  505169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:25:18.666110  505169 cri.go:89] found id: "58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785"
	I1020 13:25:18.666131  505169 cri.go:89] found id: "8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea"
	I1020 13:25:18.666136  505169 cri.go:89] found id: "143b6525a7ad6f934e45380f7976365e8a6ae6ec4b65dc75abf1434765d1f818"
	I1020 13:25:18.666139  505169 cri.go:89] found id: "77cd44a58f82396ce834a1fa844454d78fc45a286cf2263f2d5b938131f28f2a"
	I1020 13:25:18.666143  505169 cri.go:89] found id: "6829ea5db474ed23b6f22c417c519ff9551292248dec661b2aef9bb5a0d11186"
	I1020 13:25:18.666146  505169 cri.go:89] found id: "ffe7075b79fb761eb811ac31f88b276f449cc72f039fc13d368ca9f41e9b8932"
	I1020 13:25:18.666149  505169 cri.go:89] found id: ""
	I1020 13:25:18.666232  505169 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:25:18.677219  505169 retry.go:31] will retry after 278.015872ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:25:18Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:25:18.955673  505169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:25:18.968805  505169 pause.go:52] kubelet running: false
	I1020 13:25:18.968879  505169 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:25:19.116069  505169 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:25:19.116239  505169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:25:19.196448  505169 cri.go:89] found id: "58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785"
	I1020 13:25:19.196479  505169 cri.go:89] found id: "8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea"
	I1020 13:25:19.196484  505169 cri.go:89] found id: "143b6525a7ad6f934e45380f7976365e8a6ae6ec4b65dc75abf1434765d1f818"
	I1020 13:25:19.196488  505169 cri.go:89] found id: "77cd44a58f82396ce834a1fa844454d78fc45a286cf2263f2d5b938131f28f2a"
	I1020 13:25:19.196492  505169 cri.go:89] found id: "6829ea5db474ed23b6f22c417c519ff9551292248dec661b2aef9bb5a0d11186"
	I1020 13:25:19.196496  505169 cri.go:89] found id: "ffe7075b79fb761eb811ac31f88b276f449cc72f039fc13d368ca9f41e9b8932"
	I1020 13:25:19.196499  505169 cri.go:89] found id: ""
	I1020 13:25:19.196565  505169 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:25:19.212882  505169 retry.go:31] will retry after 256.939499ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:25:19Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:25:19.470376  505169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:25:19.483607  505169 pause.go:52] kubelet running: false
	I1020 13:25:19.483716  505169 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:25:19.632101  505169 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:25:19.632198  505169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:25:19.714640  505169 cri.go:89] found id: "58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785"
	I1020 13:25:19.714665  505169 cri.go:89] found id: "8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea"
	I1020 13:25:19.714670  505169 cri.go:89] found id: "143b6525a7ad6f934e45380f7976365e8a6ae6ec4b65dc75abf1434765d1f818"
	I1020 13:25:19.714674  505169 cri.go:89] found id: "77cd44a58f82396ce834a1fa844454d78fc45a286cf2263f2d5b938131f28f2a"
	I1020 13:25:19.714677  505169 cri.go:89] found id: "6829ea5db474ed23b6f22c417c519ff9551292248dec661b2aef9bb5a0d11186"
	I1020 13:25:19.714681  505169 cri.go:89] found id: "ffe7075b79fb761eb811ac31f88b276f449cc72f039fc13d368ca9f41e9b8932"
	I1020 13:25:19.714684  505169 cri.go:89] found id: ""
	I1020 13:25:19.714751  505169 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:25:19.729500  505169 out.go:203] 
	W1020 13:25:19.732359  505169 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:25:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:25:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 13:25:19.732425  505169 out.go:285] * 
	* 
	W1020 13:25:19.739623  505169 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 13:25:19.744660  505169 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-018730 --alsologtostderr -v=1 failed: exit status 80
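The pause failure above reduces to a single probe: before pausing, minikube enumerates running containers with `sudo runc list -f json` and retries on error, and on this crio node every attempt fails with "open /run/runc: no such file or directory" until the retries are exhausted and the command exits with GUEST_PAUSE (status 80). Below is a minimal Go sketch of that probe-and-retry loop, assuming a local shell with sudo; the helper name and the fixed three-attempt/300ms backoff are illustrative, read off the log rather than taken from minikube's source.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning mirrors the probe in ssh_runner.go above: ask runc for the
// running containers as JSON. On this node it fails because the runc state
// directory /run/runc does not exist.
func listRunning() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ { // the log shows three attempts
		out, err := listRunning()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		lastErr = fmt.Errorf("list running: runc: %w: %s", err, out)
		time.Sleep(300 * time.Millisecond) // logged backoffs were ~257-278ms
	}
	// minikube surfaces this as "X Exiting due to GUEST_PAUSE".
	fmt.Println(lastErr)
}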
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-018730
helpers_test.go:243: (dbg) docker inspect newest-cni-018730:

-- stdout --
	[
	    {
	        "Id": "b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7",
	        "Created": "2025-10-20T13:24:20.016324566Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503431,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:25:00.94820978Z",
	            "FinishedAt": "2025-10-20T13:24:59.619917761Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/hosts",
	        "LogPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7-json.log",
	        "Name": "/newest-cni-018730",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-018730:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-018730",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7",
	                "LowerDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-018730",
	                "Source": "/var/lib/docker/volumes/newest-cni-018730/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-018730",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-018730",
	                "name.minikube.sigs.k8s.io": "newest-cni-018730",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b8e70f3bfefc18c4277c0fdf75bac6a486c3766976dac509015bad458deb3fa9",
	            "SandboxKey": "/var/run/docker/netns/b8e70f3bfefc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-018730": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:4b:a6:c2:88:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f69e85737164e79f4c4847958c72fc64125c4b0702605f11df6e4b774d799d40",
	                    "EndpointID": "db28584e4e6104915ed453c89a52203f8878a783e2392a2da6da1962973740f5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-018730",
	                        "b3c52ddf59c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
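
The inspect dump above is how the harness learns where the kic container's services are published: the 22/tcp mapping (host port 33458 in this run) is what the later SSH provisioning steps dial. Below is a minimal Go sketch of that lookup, shelling out to `docker container inspect -f` with the same Go template that appears in the cli_runner lines further down this log. It is illustrative, not minikube's implementation; it assumes a local Docker daemon, and the container name is the one from this run.

// port_lookup.go — a minimal sketch: resolve the host port Docker published
// for a given container port, using the same Go template the log shows.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, containerPort string) (string, error) {
	// Same template as the logged cli_runner call for "22/tcp".
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("newest-cni-018730", "22/tcp")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("22/tcp is published on 127.0.0.1:" + port) // 33458 in this run
}
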
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-018730 -n newest-cni-018730
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-018730 -n newest-cni-018730: exit status 2 (441.374591ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-018730 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-018730 logs -n 25: (1.148352921s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-794175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ stop    │ -p embed-certs-979197 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ image   │ default-k8s-diff-port-794175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ pause   │ -p default-k8s-diff-port-794175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p disable-driver-mounts-972433                                                                                                                                                                                                               │ disable-driver-mounts-972433 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ image   │ embed-certs-979197 image list --format=json                                                                                                                                                                                                   │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ pause   │ -p embed-certs-979197 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ stop    │ -p newest-cni-018730 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-018730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ image   │ newest-cni-018730 image list --format=json                                                                                                                                                                                                    │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ pause   │ -p newest-cni-018730 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:25:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:25:00.607461  503305 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:25:00.607684  503305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:00.607696  503305 out.go:374] Setting ErrFile to fd 2...
	I1020 13:25:00.607702  503305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:00.607994  503305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:25:00.608523  503305 out.go:368] Setting JSON to false
	I1020 13:25:00.609738  503305 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11251,"bootTime":1760955450,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:25:00.611615  503305 start.go:141] virtualization:  
	I1020 13:25:00.614886  503305 out.go:179] * [newest-cni-018730] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:25:00.619089  503305 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:25:00.619092  503305 notify.go:220] Checking for updates...
	I1020 13:25:00.622405  503305 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:25:00.626217  503305 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:00.629514  503305 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:25:00.632572  503305 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:25:00.635683  503305 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:25:00.639397  503305 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:00.640272  503305 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:25:00.679991  503305 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:25:00.680170  503305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:00.755317  503305 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:25:00.745520644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:00.755465  503305 docker.go:318] overlay module found
	I1020 13:25:00.760712  503305 out.go:179] * Using the docker driver based on existing profile
	I1020 13:25:00.763684  503305 start.go:305] selected driver: docker
	I1020 13:25:00.763713  503305 start.go:925] validating driver "docker" against &{Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:00.763833  503305 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:25:00.764935  503305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:00.854931  503305 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:25:00.844985076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:00.855535  503305 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 13:25:00.855611  503305 cni.go:84] Creating CNI manager for ""
	I1020 13:25:00.855702  503305 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:25:00.855839  503305 start.go:349] cluster config:
	{Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:00.861004  503305 out.go:179] * Starting "newest-cni-018730" primary control-plane node in "newest-cni-018730" cluster
	I1020 13:25:00.863881  503305 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:25:00.866871  503305 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:25:00.869889  503305 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:25:00.869854  503305 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:25:00.870032  503305 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:25:00.870045  503305 cache.go:58] Caching tarball of preloaded images
	I1020 13:25:00.870125  503305 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:25:00.870135  503305 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:25:00.870246  503305 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/config.json ...
	I1020 13:25:00.890694  503305 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:25:00.890775  503305 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:25:00.890829  503305 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:25:00.890869  503305 start.go:360] acquireMachinesLock for newest-cni-018730: {Name:mke4ea61e223de4e71dff13c842eb038a598c816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:00.891004  503305 start.go:364] duration metric: took 78.458µs to acquireMachinesLock for "newest-cni-018730"
	I1020 13:25:00.891029  503305 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:25:00.891035  503305 fix.go:54] fixHost starting: 
	I1020 13:25:00.891475  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:00.910819  503305 fix.go:112] recreateIfNeeded on newest-cni-018730: state=Stopped err=<nil>
	W1020 13:25:00.910846  503305 fix.go:138] unexpected machine state, will restart: <nil>
	W1020 13:24:59.857035  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:01.857244  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:25:00.914205  503305 out.go:252] * Restarting existing docker container for "newest-cni-018730" ...
	I1020 13:25:00.914296  503305 cli_runner.go:164] Run: docker start newest-cni-018730
	I1020 13:25:01.210684  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:01.233786  503305 kic.go:430] container "newest-cni-018730" state is running.
	I1020 13:25:01.234195  503305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:25:01.257700  503305 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/config.json ...
	I1020 13:25:01.257946  503305 machine.go:93] provisionDockerMachine start ...
	I1020 13:25:01.258013  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:01.279352  503305 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:01.279897  503305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1020 13:25:01.279911  503305 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:25:01.281035  503305 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1020 13:25:04.432286  503305 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-018730
	
	I1020 13:25:04.432318  503305 ubuntu.go:182] provisioning hostname "newest-cni-018730"
	I1020 13:25:04.432416  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:04.450360  503305 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:04.450682  503305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1020 13:25:04.450698  503305 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-018730 && echo "newest-cni-018730" | sudo tee /etc/hostname
	I1020 13:25:04.614694  503305 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-018730
	
	I1020 13:25:04.614778  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:04.633067  503305 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:04.633385  503305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1020 13:25:04.633408  503305 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-018730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-018730/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-018730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:25:04.784665  503305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:25:04.784755  503305 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:25:04.784788  503305 ubuntu.go:190] setting up certificates
	I1020 13:25:04.784798  503305 provision.go:84] configureAuth start
	I1020 13:25:04.784859  503305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:25:04.802697  503305 provision.go:143] copyHostCerts
	I1020 13:25:04.802768  503305 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:25:04.802814  503305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:25:04.802916  503305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:25:04.803025  503305 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:25:04.803033  503305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:25:04.803059  503305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:25:04.803116  503305 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:25:04.803121  503305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:25:04.803143  503305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:25:04.803189  503305 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.newest-cni-018730 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-018730]
	I1020 13:25:05.176966  503305 provision.go:177] copyRemoteCerts
	I1020 13:25:05.177041  503305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:25:05.177093  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.198207  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:05.304142  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:25:05.322719  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:25:05.342992  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:25:05.363992  503305 provision.go:87] duration metric: took 579.180021ms to configureAuth
	I1020 13:25:05.364020  503305 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:25:05.364218  503305 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:05.364319  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.381017  503305 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:05.381327  503305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1020 13:25:05.381346  503305 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:25:05.677588  503305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:25:05.677619  503305 machine.go:96] duration metric: took 4.419662652s to provisionDockerMachine
	I1020 13:25:05.677631  503305 start.go:293] postStartSetup for "newest-cni-018730" (driver="docker")
	I1020 13:25:05.677660  503305 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:25:05.677726  503305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:25:05.677773  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.694937  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:05.804600  503305 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:25:05.808448  503305 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:25:05.808487  503305 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:25:05.808516  503305 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:25:05.808596  503305 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:25:05.808716  503305 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:25:05.808841  503305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:25:05.816681  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:25:05.835830  503305 start.go:296] duration metric: took 158.182209ms for postStartSetup
	I1020 13:25:05.835913  503305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:25:05.835964  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.852951  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:05.958342  503305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:25:05.963082  503305 fix.go:56] duration metric: took 5.072040491s for fixHost
	I1020 13:25:05.963113  503305 start.go:83] releasing machines lock for "newest-cni-018730", held for 5.072097026s
	I1020 13:25:05.963180  503305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:25:05.979934  503305 ssh_runner.go:195] Run: cat /version.json
	I1020 13:25:05.979958  503305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:25:05.979998  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.980010  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.997948  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:06.016025  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:06.104614  503305 ssh_runner.go:195] Run: systemctl --version
	I1020 13:25:06.199037  503305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:25:06.238844  503305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:25:06.243275  503305 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:25:06.243355  503305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:25:06.251470  503305 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:25:06.251506  503305 start.go:495] detecting cgroup driver to use...
	I1020 13:25:06.251556  503305 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:25:06.251637  503305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:25:06.266898  503305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:25:06.279826  503305 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:25:06.279891  503305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:25:06.295784  503305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:25:06.308894  503305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:25:06.435531  503305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:25:06.560528  503305 docker.go:234] disabling docker service ...
	I1020 13:25:06.560644  503305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:25:06.577733  503305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:25:06.591204  503305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:25:06.712633  503305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:25:06.836146  503305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:25:06.851479  503305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:25:06.871697  503305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:25:06.871768  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.881724  503305 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:25:06.881805  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.897272  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.906783  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.916389  503305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:25:06.925237  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.934468  503305 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.944498  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.953412  503305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:25:06.961169  503305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:25:06.969238  503305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:25:07.090835  503305 ssh_runner.go:195] Run: sudo systemctl restart crio
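
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts crio. A hedged Go sketch of that line-replacement pattern follows; it is not minikube's code, and it operates on a scratch copy rather than the live config (the path is illustrative).

// crio_conf.go — a sketch of the sed-style "replace whole line for a key"
// edits shown in the log above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setOption mirrors: sed -i 's|^.*<key> = .*$|<key> = "<value>"|'
func setOption(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	path := "/tmp/02-crio.conf" // scratch copy, not the live config
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	conf = setOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setOption(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		fmt.Println("write failed:", err)
	}
}
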
	I1020 13:25:07.220606  503305 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:25:07.220690  503305 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:25:07.224831  503305 start.go:563] Will wait 60s for crictl version
	I1020 13:25:07.224905  503305 ssh_runner.go:195] Run: which crictl
	I1020 13:25:07.228470  503305 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:25:07.255227  503305 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:25:07.255378  503305 ssh_runner.go:195] Run: crio --version
	I1020 13:25:07.284944  503305 ssh_runner.go:195] Run: crio --version
	I1020 13:25:07.317945  503305 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 13:25:07.320765  503305 cli_runner.go:164] Run: docker network inspect newest-cni-018730 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:25:07.335081  503305 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 13:25:07.338893  503305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
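
The one-liner above makes the host.minikube.internal entry idempotent: filter out any previous line ending in the tab-separated name, then append the current gateway IP. The same pattern in Go, as a sketch that works on a scratch file (the path is illustrative; the IP is this run's gateway):

// hosts_pin.go — a sketch of the idempotent hosts-file update shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // same filter as grep -v $'\t<name>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/tmp/hosts.scratch", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Println("pin failed:", err)
	}
}
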
	I1020 13:25:07.351541  503305 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1020 13:25:07.354463  503305 kubeadm.go:883] updating cluster {Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:25:07.354609  503305 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:25:07.354699  503305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:25:07.390615  503305 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:25:07.390639  503305 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:25:07.390700  503305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:25:07.419365  503305 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:25:07.419387  503305 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:25:07.419398  503305 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 13:25:07.419506  503305 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-018730 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 13:25:07.419599  503305 ssh_runner.go:195] Run: crio config
	I1020 13:25:07.493035  503305 cni.go:84] Creating CNI manager for ""
	I1020 13:25:07.493059  503305 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:25:07.493112  503305 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1020 13:25:07.493145  503305 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-018730 NodeName:newest-cni-018730 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:25:07.493286  503305 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-018730"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
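
The generated config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the next steps scp to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only Go sketch that splits such a stream and reports each document's kind, handy when eyeballing what a run actually fed to kubeadm; it is illustrative, not part of the harness, and the path is the one from this log:

// kinds.go — a sketch: split a multi-document kubeadm config on "---"
// separators and print each document's kind line.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, raw := range strings.Split(doc, "\n") {
			line := strings.TrimSpace(raw)
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}
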
	
	I1020 13:25:07.493359  503305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:25:07.501807  503305 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:25:07.501876  503305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:25:07.509955  503305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 13:25:07.523792  503305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:25:07.536820  503305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
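	The 2212-byte file just copied is the rendered config shown above. As a side check, kubeadm itself can vet it before anything is applied; a minimal sketch, run on the node:
	# Sketch: validate the rendered config without touching the cluster.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new  # structural check (kubeadm >= 1.26)
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run   # print what init would do, apply nothing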
	I1020 13:25:07.549820  503305 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:25:07.554619  503305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
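	The one-liner above is an idempotent /etc/hosts update: filter out any stale control-plane.minikube.internal entry, append the current mapping, and swap the file in via a temp copy. The same pattern, generalized (update_hosts_entry is a hypothetical helper, not part of minikube):
	# Hypothetical helper: filter-append-replace an /etc/hosts entry.
	update_hosts_entry() {  # $1 = IP, $2 = hostname
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	update_hosts_entry 192.168.85.2 control-plane.minikube.internal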
	I1020 13:25:07.564881  503305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:25:07.690115  503305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:25:07.706306  503305 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730 for IP: 192.168.85.2
	I1020 13:25:07.706329  503305 certs.go:195] generating shared ca certs ...
	I1020 13:25:07.706344  503305 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:07.706488  503305 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:25:07.706538  503305 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:25:07.706548  503305 certs.go:257] generating profile certs ...
	I1020 13:25:07.706629  503305 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/client.key
	I1020 13:25:07.706695  503305 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key.b19b56d0
	I1020 13:25:07.706737  503305 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.key
	I1020 13:25:07.706844  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:25:07.706878  503305 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:25:07.706893  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:25:07.706923  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:25:07.706955  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:25:07.706981  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:25:07.707024  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:25:07.708149  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:25:07.732641  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:25:07.750867  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:25:07.776526  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:25:07.797968  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 13:25:07.817498  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:25:07.842348  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:25:07.867160  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 13:25:07.898838  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:25:07.921904  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:25:07.941580  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:25:07.960804  503305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:25:07.977225  503305 ssh_runner.go:195] Run: openssl version
	I1020 13:25:07.983737  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:25:07.993219  503305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:25:07.999008  503305 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:25:07.999120  503305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:25:08.054164  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:25:08.062683  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:25:08.073083  503305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:25:08.077213  503305 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:25:08.077311  503305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:25:08.119123  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:25:08.127227  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:25:08.136003  503305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:25:08.139995  503305 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:25:08.140062  503305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:25:08.182192  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
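	The oddly named links above (51391683.0, 3ec20f2e.0, b5213941.0) follow the OpenSSL c_rehash convention: each trusted cert in /etc/ssl/certs is reachable under its subject hash, which is exactly what the preceding `openssl x509 -hash -noout` calls print. Verifying one by hand:
	# Sketch: the symlink name is the cert's subject hash plus ".0".
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${hash}.0"   # should point back at minikubeCA.pem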
	I1020 13:25:08.190566  503305 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:25:08.194530  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:25:08.236001  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:25:08.277668  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:25:08.320909  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:25:08.369803  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:25:08.426396  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
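	Each `-checkend 86400` above exits non-zero if the certificate expires within 86400 seconds (24 hours); a failing check is what would trigger cert regeneration. The same sweep as a loop:
	# Sketch: flag any control-plane cert expiring within the next 24h.
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    || echo "expiring soon: ${c}.crt"
	done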
	I1020 13:25:08.510302  503305 kubeadm.go:400] StartCluster: {Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:08.510453  503305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:25:08.510565  503305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:25:08.580496  503305 cri.go:89] found id: "143b6525a7ad6f934e45380f7976365e8a6ae6ec4b65dc75abf1434765d1f818"
	I1020 13:25:08.580579  503305 cri.go:89] found id: "77cd44a58f82396ce834a1fa844454d78fc45a286cf2263f2d5b938131f28f2a"
	I1020 13:25:08.580602  503305 cri.go:89] found id: "6829ea5db474ed23b6f22c417c519ff9551292248dec661b2aef9bb5a0d11186"
	I1020 13:25:08.580640  503305 cri.go:89] found id: "ffe7075b79fb761eb811ac31f88b276f449cc72f039fc13d368ca9f41e9b8932"
	I1020 13:25:08.580662  503305 cri.go:89] found id: ""
	I1020 13:25:08.580751  503305 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:25:08.597773  503305 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:25:08Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:25:08.597902  503305 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:25:08.612251  503305 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:25:08.612325  503305 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:25:08.612453  503305 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:25:08.624697  503305 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:25:08.625407  503305 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-018730" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:08.625751  503305 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-018730" cluster setting kubeconfig missing "newest-cni-018730" context setting]
	I1020 13:25:08.626313  503305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:08.628093  503305 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:25:08.641642  503305 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 13:25:08.641725  503305 kubeadm.go:601] duration metric: took 29.369719ms to restartPrimaryControlPlane
	I1020 13:25:08.641748  503305 kubeadm.go:402] duration metric: took 131.456374ms to StartCluster
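	The restart decision above hinges on the `diff -u` run at 13:25:08.628093: when the kubeadm config already on disk matches the freshly rendered one, minikube skips re-running kubeadm and only restarts the existing control plane. The gate reduces to:
	# Sketch of the reconfiguration gate: identical configs => restart only.
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "running cluster does not require reconfiguration"
	else
	  echo "config drift; re-run kubeadm against the new config"
	fi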
	I1020 13:25:08.641803  503305 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:08.641895  503305 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:08.642953  503305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:08.643238  503305 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:25:08.643744  503305 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:08.643765  503305 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:25:08.643844  503305 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-018730"
	I1020 13:25:08.643865  503305 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-018730"
	W1020 13:25:08.643877  503305 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:25:08.643888  503305 addons.go:69] Setting dashboard=true in profile "newest-cni-018730"
	I1020 13:25:08.643985  503305 addons.go:238] Setting addon dashboard=true in "newest-cni-018730"
	W1020 13:25:08.644009  503305 addons.go:247] addon dashboard should already be in state true
	I1020 13:25:08.644043  503305 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:25:08.644672  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:08.643904  503305 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:25:08.643912  503305 addons.go:69] Setting default-storageclass=true in profile "newest-cni-018730"
	I1020 13:25:08.645400  503305 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-018730"
	I1020 13:25:08.645518  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:08.645683  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:08.651190  503305 out.go:179] * Verifying Kubernetes components...
	I1020 13:25:08.659203  503305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:25:08.713935  503305 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:25:08.714085  503305 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:25:08.716972  503305 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:25:08.716995  503305 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:25:08.717061  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:08.719627  503305 addons.go:238] Setting addon default-storageclass=true in "newest-cni-018730"
	W1020 13:25:08.719654  503305 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:25:08.719680  503305 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:25:08.720089  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:08.720311  503305 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1020 13:25:04.357222  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:06.856642  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:08.857074  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:25:08.723240  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:25:08.723269  503305 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:25:08.723331  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:08.758251  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:08.780574  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:08.785897  503305 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:25:08.785925  503305 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:25:08.785991  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:08.809653  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:09.013364  503305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:25:09.069659  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:25:09.069733  503305 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:25:09.072489  503305 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:25:09.072610  503305 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:25:09.076954  503305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:25:09.114891  503305 api_server.go:72] duration metric: took 471.590944ms to wait for apiserver process to appear ...
	I1020 13:25:09.114964  503305 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:25:09.115010  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:09.133357  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:25:09.133433  503305 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:25:09.192028  503305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:25:09.219008  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:25:09.219091  503305 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:25:09.286893  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:25:09.286963  503305 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:25:09.367613  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:25:09.367691  503305 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:25:09.418665  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:25:09.418740  503305 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:25:09.434415  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:25:09.434489  503305 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:25:09.449210  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:25:09.449289  503305 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:25:09.469368  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:25:09.469437  503305 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:25:09.485269  503305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1020 13:25:10.857187  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:13.356882  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:25:14.117288  503305 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1020 13:25:14.117381  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:14.873027  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 13:25:14.873104  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 13:25:14.873134  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:14.967771  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 13:25:14.967851  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 13:25:15.116075  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:15.288105  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:25:15.288190  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:25:15.615904  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:15.657502  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:25:15.657583  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:25:16.116098  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:16.146221  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:25:16.146298  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:25:16.615103  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:16.680014  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:25:16.680110  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:25:16.696563  503305 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.619537247s)
	I1020 13:25:16.696689  503305 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.504589664s)
	I1020 13:25:16.804705  503305 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.319308208s)
	I1020 13:25:16.807951  503305 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-018730 addons enable metrics-server
	
	I1020 13:25:16.810872  503305 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1020 13:25:16.813654  503305 addons.go:514] duration metric: took 8.169879499s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1020 13:25:17.115118  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:17.125244  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 13:25:17.126354  503305 api_server.go:141] control plane version: v1.34.1
	I1020 13:25:17.126404  503305 api_server.go:131] duration metric: took 8.011406401s to wait for apiserver health ...
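	The healthz progression above is the normal restart sequence: 403 while the RBAC bootstrap roles that permit anonymous access to /healthz are still missing, 500 while the remaining poststart hooks (rbac/bootstrap-roles and friends) finish, then 200 with "ok". Polling it by hand looks like this (-k skips verification because the endpoint presents the cluster CA):
	# Sketch: wait until the apiserver reports healthy; -f makes 403/500 fail.
	until curl -ksf https://192.168.85.2:8443/healthz >/dev/null; do
	  sleep 0.5
	done
	curl -ks https://192.168.85.2:8443/healthz   # prints "ok" once healthy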
	I1020 13:25:17.126428  503305 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:25:17.130046  503305 system_pods.go:59] 8 kube-system pods found
	I1020 13:25:17.130129  503305 system_pods.go:61] "coredns-66bc5c9577-sjxcr" [8a89d2c7-108a-4c4b-9f12-c918862fa04a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 13:25:17.130161  503305 system_pods.go:61] "etcd-newest-cni-018730" [19f3b3ce-69b4-4765-b079-797644b0d529] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:25:17.130203  503305 system_pods.go:61] "kindnet-znl5b" [5a6ca5b2-16cc-4b03-a59f-b1867665c8c8] Running
	I1020 13:25:17.130233  503305 system_pods.go:61] "kube-apiserver-newest-cni-018730" [ba8650fa-d31b-4e19-b6ed-806262274ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:25:17.130259  503305 system_pods.go:61] "kube-controller-manager-newest-cni-018730" [f4e1cbc5-da99-4dcc-8fa1-465324888375] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:25:17.130295  503305 system_pods.go:61] "kube-proxy-cfrgk" [2b049f68-632a-4288-9c43-da5c3d72e46f] Running
	I1020 13:25:17.130323  503305 system_pods.go:61] "kube-scheduler-newest-cni-018730" [0b6f56c6-4faa-479c-b013-20b4ea5e1c5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:25:17.130348  503305 system_pods.go:61] "storage-provisioner" [81054ccb-6cd7-47f4-8244-a5a5df3d6ca1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 13:25:17.130388  503305 system_pods.go:74] duration metric: took 3.94004ms to wait for pod list to return data ...
	I1020 13:25:17.130416  503305 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:25:17.137175  503305 default_sa.go:45] found service account: "default"
	I1020 13:25:17.137252  503305 default_sa.go:55] duration metric: took 6.80123ms for default service account to be created ...
	I1020 13:25:17.137280  503305 kubeadm.go:586] duration metric: took 8.493984206s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 13:25:17.137327  503305 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:25:17.142609  503305 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:25:17.142687  503305 node_conditions.go:123] node cpu capacity is 2
	I1020 13:25:17.142714  503305 node_conditions.go:105] duration metric: took 5.362532ms to run NodePressure ...
	I1020 13:25:17.142739  503305 start.go:241] waiting for startup goroutines ...
	I1020 13:25:17.142774  503305 start.go:246] waiting for cluster config update ...
	I1020 13:25:17.142804  503305 start.go:255] writing updated cluster config ...
	I1020 13:25:17.143163  503305 ssh_runner.go:195] Run: rm -f paused
	I1020 13:25:17.239646  503305 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:25:17.242883  503305 out.go:179] * Done! kubectl is now configured to use "newest-cni-018730" cluster and "default" namespace by default
	W1020 13:25:15.856640  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:18.355894  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.14252315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.153985915Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=caa14aae-e220-4afb-83c7-717740321959 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.163037957Z" level=info msg="Ran pod sandbox 1074c3f4c1aeb77198cd094983904e1e90cb5846d651df5194d51481a0d14866 with infra container: kube-system/kube-proxy-cfrgk/POD" id=caa14aae-e220-4afb-83c7-717740321959 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.169251447Z" level=info msg="Running pod sandbox: kube-system/kindnet-znl5b/POD" id=18819081-9090-4db0-93a8-6204b8c95b71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.169313323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.176658758Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=18819081-9090-4db0-93a8-6204b8c95b71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.195374674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=72cfb100-3471-4fb7-913a-b511c5a09559 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.205288627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=58f5a987-5baa-4004-a5d0-31a24b809ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.211208894Z" level=info msg="Ran pod sandbox 8cc8ffbaba43935a3b19722c0f1fe8549ac195b6cd2fb35497b251936765723d with infra container: kube-system/kindnet-znl5b/POD" id=18819081-9090-4db0-93a8-6204b8c95b71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.215888612Z" level=info msg="Creating container: kube-system/kube-proxy-cfrgk/kube-proxy" id=fdf80c34-b324-4f75-84f8-b9eef6669c62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.215991292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.224857568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.232434948Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d15442c2-4a2c-4542-85b6-47def1cf5586 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.232896017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.2440902Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f7e9b032-8a65-4908-b4e9-6907330f890f name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.246223874Z" level=info msg="Creating container: kube-system/kindnet-znl5b/kindnet-cni" id=ffa9b90f-41ef-422c-a27f-7484fee16303 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.246471721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.259816465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.26058134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.292570573Z" level=info msg="Created container 58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785: kube-system/kindnet-znl5b/kindnet-cni" id=ffa9b90f-41ef-422c-a27f-7484fee16303 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.293181108Z" level=info msg="Starting container: 58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785" id=7cfce843-ac64-402f-b9d6-0d7ec3c4cbbe name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.294788662Z" level=info msg="Started container" PID=1063 containerID=58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785 description=kube-system/kindnet-znl5b/kindnet-cni id=7cfce843-ac64-402f-b9d6-0d7ec3c4cbbe name=/runtime.v1.RuntimeService/StartContainer sandboxID=8cc8ffbaba43935a3b19722c0f1fe8549ac195b6cd2fb35497b251936765723d
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.360913084Z" level=info msg="Created container 8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea: kube-system/kube-proxy-cfrgk/kube-proxy" id=fdf80c34-b324-4f75-84f8-b9eef6669c62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.364577371Z" level=info msg="Starting container: 8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea" id=3e01f8f9-0a58-4a1c-a2cb-e19418d24049 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.371993863Z" level=info msg="Started container" PID=1059 containerID=8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea description=kube-system/kube-proxy-cfrgk/kube-proxy id=3e01f8f9-0a58-4a1c-a2cb-e19418d24049 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1074c3f4c1aeb77198cd094983904e1e90cb5846d651df5194d51481a0d14866
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	58257a846c41e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   8cc8ffbaba439       kindnet-znl5b                               kube-system
	8b905eb177a0f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   1074c3f4c1aeb       kube-proxy-cfrgk                            kube-system
	143b6525a7ad6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   bcdea74c09f21       kube-apiserver-newest-cni-018730            kube-system
	77cd44a58f823       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   971a9e1d9b991       kube-controller-manager-newest-cni-018730   kube-system
	6829ea5db474e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   1f862917273a9       kube-scheduler-newest-cni-018730            kube-system
	ffe7075b79fb7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   8e953928f9de5       etcd-newest-cni-018730                      kube-system
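	The table above is what `crictl ps` reports on the node; the ATTEMPT column at 1 confirms every control-plane container was restarted once during this test. To reproduce it against the socket configured earlier:
	# Sketch: list all containers (including exited) via the CRI-O socket.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a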
	
	
	==> describe nodes <==
	Name:               newest-cni-018730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-018730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=newest-cni-018730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_24_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:24:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-018730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:25:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:25:15 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:25:15 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:25:15 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 20 Oct 2025 13:25:15 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-018730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                02857502-1595-48f5-a221-2258d77f161c
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-018730                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-znl5b                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-018730             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-018730    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-cfrgk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-018730             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-018730 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-018730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-018730 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-018730 event: Registered Node newest-cni-018730 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x9 over 14s)  kubelet          Node newest-cni-018730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-018730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x7 over 14s)  kubelet          Node newest-cni-018730 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-018730 event: Registered Node newest-cni-018730 in Controller
	
	
	==> dmesg <==
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	[ +43.225983] overlayfs: idmapped layers are currently not supported
	[Oct20 13:24] overlayfs: idmapped layers are currently not supported
	[Oct20 13:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ffe7075b79fb761eb811ac31f88b276f449cc72f039fc13d368ca9f41e9b8932] <==
	{"level":"warn","ts":"2025-10-20T13:25:12.603613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.673571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.673989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.708860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.729032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.748346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.768661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.800110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.823947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.840963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.893511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.914926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.935947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.966595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.986575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.055482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.062755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.095842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.119065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.156230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.177286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.205174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.231317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.270593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.408393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41194","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:25:21 up  3:07,  0 user,  load average: 3.91, 2.96, 2.60
	Linux newest-cni-018730 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785] <==
	I1020 13:25:16.403148       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:25:16.403436       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 13:25:16.403600       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:25:16.403612       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:25:16.403622       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:25:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:25:16.612657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:25:16.612735       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:25:16.612767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:25:16.618580       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [143b6525a7ad6f934e45380f7976365e8a6ae6ec4b65dc75abf1434765d1f818] <==
	I1020 13:25:15.069699       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 13:25:15.069739       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:25:15.094907       1 aggregator.go:171] initial CRD sync complete...
	I1020 13:25:15.094945       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 13:25:15.094954       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:25:15.094961       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:25:15.104522       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 13:25:15.107229       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 13:25:15.107267       1 policy_source.go:240] refreshing policies
	I1020 13:25:15.111320       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 13:25:15.111348       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 13:25:15.137788       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1020 13:25:15.322272       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:25:15.783797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:25:16.041495       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:25:16.257585       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:25:16.418453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:25:16.525902       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:25:16.578161       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:25:16.773665       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.229.87"}
	I1020 13:25:16.798168       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.201.146"}
	I1020 13:25:19.771840       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:25:19.813015       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:25:19.884904       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 13:25:19.960562       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [77cd44a58f82396ce834a1fa844454d78fc45a286cf2263f2d5b938131f28f2a] <==
	I1020 13:25:19.306410       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:25:19.308641       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 13:25:19.310351       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 13:25:19.313747       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 13:25:19.318870       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:25:19.318981       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:25:19.319060       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-018730"
	I1020 13:25:19.319106       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1020 13:25:19.321164       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 13:25:19.326205       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:25:19.326746       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:25:19.332406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:25:19.332436       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:25:19.332444       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:25:19.332529       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:25:19.333950       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 13:25:19.334182       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:25:19.347774       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:25:19.350436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:25:19.358428       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 13:25:19.355797       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 13:25:19.355935       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 13:25:19.356000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 13:25:19.353878       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 13:25:19.361207       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	
	
	==> kube-proxy [8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea] <==
	I1020 13:25:16.791761       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:25:16.953452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:25:17.056485       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:25:17.056619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 13:25:17.056732       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:25:17.108863       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:25:17.109017       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:25:17.115005       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:25:17.115479       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:25:17.123704       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:25:17.131453       1 config.go:200] "Starting service config controller"
	I1020 13:25:17.131482       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:25:17.131497       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:25:17.131501       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:25:17.131509       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:25:17.131515       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:25:17.132136       1 config.go:309] "Starting node config controller"
	I1020 13:25:17.132158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:25:17.132164       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:25:17.231875       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:25:17.231982       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:25:17.232022       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6829ea5db474ed23b6f22c417c519ff9551292248dec661b2aef9bb5a0d11186] <==
	I1020 13:25:14.655048       1 serving.go:386] Generated self-signed cert in-memory
	I1020 13:25:17.393171       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:25:17.393311       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:25:17.399113       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:25:17.399790       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 13:25:17.399849       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 13:25:17.399909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:25:17.413290       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:25:17.413327       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:25:17.413368       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:25:17.413376       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:25:17.500484       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 13:25:17.513715       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:25:17.513831       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.235371     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.338784     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.338927     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.338973     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.340475     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.362789     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-018730\" already exists" pod="kube-system/etcd-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.362822     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.387393     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-018730\" already exists" pod="kube-system/kube-apiserver-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.387448     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.433055     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-018730\" already exists" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.433114     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.457571     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-018730\" already exists" pod="kube-system/kube-scheduler-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.553475     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.593572     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-018730\" already exists" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.830333     728 apiserver.go:52] "Watching apiserver"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.938542     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015103     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-xtables-lock\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015159     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b049f68-632a-4288-9c43-da5c3d72e46f-lib-modules\") pod \"kube-proxy-cfrgk\" (UID: \"2b049f68-632a-4288-9c43-da5c3d72e46f\") " pod="kube-system/kube-proxy-cfrgk"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015179     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-cni-cfg\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015203     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-lib-modules\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015245     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b049f68-632a-4288-9c43-da5c3d72e46f-xtables-lock\") pod \"kube-proxy-cfrgk\" (UID: \"2b049f68-632a-4288-9c43-da5c3d72e46f\") " pod="kube-system/kube-proxy-cfrgk"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.080295     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 20 13:25:18 newest-cni-018730 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:25:18 newest-cni-018730 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:25:18 newest-cni-018730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
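The Ready=False condition in the node description above names the proximate cause: kindnet had only just restarted, so no CNI configuration file had been written to /etc/cni/net.d/ yet. A minimal sketch for confirming this by hand (illustrative commands, not part of the recorded run; profile name taken from this report):

	out/minikube-linux-arm64 -p newest-cni-018730 ssh -- ls /etc/cni/net.d/
	out/minikube-linux-arm64 -p newest-cni-018730 ssh -- sudo crictl ps --name kindnet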
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-018730 -n newest-cni-018730
E1020 13:25:21.740003  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-018730 -n newest-cni-018730: exit status 2 (393.664058ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-018730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-sjxcr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj: exit status 1 (85.819323ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-sjxcr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-qhv5v" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-w2lzj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj: exit status 1
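The NotFound errors above are an artifact of namespacing rather than of missing pods: the non-running pods were collected with -A across all namespaces, but kubectl describe pod without -n looks only in the default namespace. A sketch of the namespaced form, assuming the namespaces these pods normally run in (kube-system for coredns and storage-provisioner; kubernetes-dashboard for the dashboard pods, as seen in the apiserver clusterIP allocations above):

	kubectl --context newest-cni-018730 -n kube-system describe pod coredns-66bc5c9577-sjxcr storage-provisioner
	kubectl --context newest-cni-018730 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj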
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-018730
helpers_test.go:243: (dbg) docker inspect newest-cni-018730:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7",
	        "Created": "2025-10-20T13:24:20.016324566Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503431,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:25:00.94820978Z",
	            "FinishedAt": "2025-10-20T13:24:59.619917761Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/hosts",
	        "LogPath": "/var/lib/docker/containers/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7/b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7-json.log",
	        "Name": "/newest-cni-018730",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-018730:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-018730",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b3c52ddf59c019ca48540663c9223cdf7890a11c34f9c402d40ac21c37c65da7",
	                "LowerDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d60939879b7cd16f49ff4a57e54f05af592e3085431a84d061bd9d5573c22e73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-018730",
	                "Source": "/var/lib/docker/volumes/newest-cni-018730/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-018730",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-018730",
	                "name.minikube.sigs.k8s.io": "newest-cni-018730",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b8e70f3bfefc18c4277c0fdf75bac6a486c3766976dac509015bad458deb3fa9",
	            "SandboxKey": "/var/run/docker/netns/b8e70f3bfefc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-018730": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:4b:a6:c2:88:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f69e85737164e79f4c4847958c72fc64125c4b0702605f11df6e4b774d799d40",
	                    "EndpointID": "db28584e4e6104915ed453c89a52203f8878a783e2392a2da6da1962973740f5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-018730",
	                        "b3c52ddf59c0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
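Note that the inspect output reports "Running": true and "Paused": false for the node container even though this is a Pause test: minikube pause freezes the kubelet and workloads inside the node (consistent with the kubelet.service stop recorded in the kubelet log above) rather than docker-pausing the outer container, so these fields are expected to read this way. To read just the container state without the full dump, a one-liner sketch:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-018730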
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-018730 -n newest-cni-018730
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-018730 -n newest-cni-018730: exit status 2 (368.679686ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-018730 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-018730 logs -n 25: (1.125983938s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-794175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-794175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ stop    │ -p embed-certs-979197 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ image   │ default-k8s-diff-port-794175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ pause   │ -p default-k8s-diff-port-794175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p disable-driver-mounts-972433                                                                                                                                                                                                               │ disable-driver-mounts-972433 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ image   │ embed-certs-979197 image list --format=json                                                                                                                                                                                                   │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ pause   │ -p embed-certs-979197 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ stop    │ -p newest-cni-018730 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-018730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ image   │ newest-cni-018730 image list --format=json                                                                                                                                                                                                    │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ pause   │ -p newest-cni-018730 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:25:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:25:00.607461  503305 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:25:00.607684  503305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:00.607696  503305 out.go:374] Setting ErrFile to fd 2...
	I1020 13:25:00.607702  503305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:00.607994  503305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:25:00.608523  503305 out.go:368] Setting JSON to false
	I1020 13:25:00.609738  503305 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11251,"bootTime":1760955450,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:25:00.611615  503305 start.go:141] virtualization:  
	I1020 13:25:00.614886  503305 out.go:179] * [newest-cni-018730] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:25:00.619089  503305 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:25:00.619092  503305 notify.go:220] Checking for updates...
	I1020 13:25:00.622405  503305 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:25:00.626217  503305 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:00.629514  503305 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:25:00.632572  503305 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:25:00.635683  503305 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:25:00.639397  503305 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:00.640272  503305 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:25:00.679991  503305 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:25:00.680170  503305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:00.755317  503305 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:25:00.745520644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:00.755465  503305 docker.go:318] overlay module found
	I1020 13:25:00.760712  503305 out.go:179] * Using the docker driver based on existing profile
	I1020 13:25:00.763684  503305 start.go:305] selected driver: docker
	I1020 13:25:00.763713  503305 start.go:925] validating driver "docker" against &{Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:00.763833  503305 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:25:00.764935  503305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:00.854931  503305 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:25:00.844985076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:00.855535  503305 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 13:25:00.855611  503305 cni.go:84] Creating CNI manager for ""
	I1020 13:25:00.855702  503305 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:25:00.855839  503305 start.go:349] cluster config:
	{Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:00.861004  503305 out.go:179] * Starting "newest-cni-018730" primary control-plane node in "newest-cni-018730" cluster
	I1020 13:25:00.863881  503305 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:25:00.866871  503305 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:25:00.869889  503305 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:25:00.869854  503305 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:25:00.870032  503305 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:25:00.870045  503305 cache.go:58] Caching tarball of preloaded images
	I1020 13:25:00.870125  503305 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:25:00.870135  503305 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:25:00.870246  503305 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/config.json ...
	I1020 13:25:00.890694  503305 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:25:00.890775  503305 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:25:00.890829  503305 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:25:00.890869  503305 start.go:360] acquireMachinesLock for newest-cni-018730: {Name:mke4ea61e223de4e71dff13c842eb038a598c816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:00.891004  503305 start.go:364] duration metric: took 78.458µs to acquireMachinesLock for "newest-cni-018730"
	I1020 13:25:00.891029  503305 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:25:00.891035  503305 fix.go:54] fixHost starting: 
	I1020 13:25:00.891475  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:00.910819  503305 fix.go:112] recreateIfNeeded on newest-cni-018730: state=Stopped err=<nil>
	W1020 13:25:00.910846  503305 fix.go:138] unexpected machine state, will restart: <nil>
	W1020 13:24:59.857035  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:01.857244  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:25:00.914205  503305 out.go:252] * Restarting existing docker container for "newest-cni-018730" ...
	I1020 13:25:00.914296  503305 cli_runner.go:164] Run: docker start newest-cni-018730
	I1020 13:25:01.210684  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:01.233786  503305 kic.go:430] container "newest-cni-018730" state is running.
	I1020 13:25:01.234195  503305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:25:01.257700  503305 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/config.json ...
	I1020 13:25:01.257946  503305 machine.go:93] provisionDockerMachine start ...
	I1020 13:25:01.258013  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:01.279352  503305 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:01.279897  503305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1020 13:25:01.279911  503305 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:25:01.281035  503305 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1020 13:25:04.432286  503305 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-018730
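
The first dial above failed with a handshake EOF because sshd inside the freshly restarted container was not yet accepting connections; minikube simply retries until the command succeeds, here about three seconds later. The same readiness poll as a standalone shell sketch, using the forwarded port, key path, and "docker" user taken from the log (all environment-specific):

	# Hypothetical standalone check: poll the forwarded SSH port until sshd answers.
	PORT=33458   # host port mapped to the container's 22/tcp, per the log above
	KEY=/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa
	for i in $(seq 1 30); do
	  if ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	         -i "$KEY" -p "$PORT" docker@127.0.0.1 hostname 2>/dev/null; then
	    break        # sshd is up; the hostname echoes back
	  fi
	  sleep 1        # not up yet; try again
	done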
	
	I1020 13:25:04.432318  503305 ubuntu.go:182] provisioning hostname "newest-cni-018730"
	I1020 13:25:04.432416  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:04.450360  503305 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:04.450682  503305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1020 13:25:04.450698  503305 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-018730 && echo "newest-cni-018730" | sudo tee /etc/hostname
	I1020 13:25:04.614694  503305 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-018730
	
	I1020 13:25:04.614778  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:04.633067  503305 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:04.633385  503305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1020 13:25:04.633408  503305 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-018730' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-018730/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-018730' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:25:04.784665  503305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:25:04.784755  503305 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:25:04.784788  503305 ubuntu.go:190] setting up certificates
	I1020 13:25:04.784798  503305 provision.go:84] configureAuth start
	I1020 13:25:04.784859  503305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:25:04.802697  503305 provision.go:143] copyHostCerts
	I1020 13:25:04.802768  503305 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:25:04.802814  503305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:25:04.802916  503305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:25:04.803025  503305 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:25:04.803033  503305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:25:04.803059  503305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:25:04.803116  503305 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:25:04.803121  503305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:25:04.803143  503305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:25:04.803189  503305 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.newest-cni-018730 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-018730]
	I1020 13:25:05.176966  503305 provision.go:177] copyRemoteCerts
	I1020 13:25:05.177041  503305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:25:05.177093  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.198207  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:05.304142  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:25:05.322719  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:25:05.342992  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:25:05.363992  503305 provision.go:87] duration metric: took 579.180021ms to configureAuth
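
configureAuth regenerated the machine server certificate with the SANs listed above (127.0.0.1, 192.168.85.2, localhost, minikube, newest-cni-018730), and the copyRemoteCerts step shipped it to /etc/docker on the node. A quick way to confirm the chain and SANs from inside the container, assuming a reasonably recent OpenSSL for the -ext flag:

	# Verify the server cert against the CA that was just copied over,
	# then print its subject alternative names.
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	sudo openssl x509 -noout -ext subjectAltName -in /etc/docker/server.pem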
	I1020 13:25:05.364020  503305 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:25:05.364218  503305 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:05.364319  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.381017  503305 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:05.381327  503305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1020 13:25:05.381346  503305 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:25:05.677588  503305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:25:05.677619  503305 machine.go:96] duration metric: took 4.419662652s to provisionDockerMachine
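
The CRIO_MINIKUBE_OPTIONS drop-in written just above marks the in-cluster service CIDR (10.96.0.0/12) as an insecure registry range and restarts CRI-O to apply it. A hedged sanity check, assuming the crio systemd unit sources /etc/sysconfig/crio.minikube as an EnvironmentFile (the case in minikube's kicbase image, but worth confirming on other bases):

	# Confirm the unit references the drop-in env file, then check that the
	# running daemon actually picked up the flag.
	sudo systemctl cat crio | grep -n sysconfig/crio.minikube
	ps -o args= -C crio | grep -o -- '--insecure-registry [^ ]*'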
	I1020 13:25:05.677631  503305 start.go:293] postStartSetup for "newest-cni-018730" (driver="docker")
	I1020 13:25:05.677660  503305 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:25:05.677726  503305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:25:05.677773  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.694937  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:05.804600  503305 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:25:05.808448  503305 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:25:05.808487  503305 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:25:05.808516  503305 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:25:05.808596  503305 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:25:05.808716  503305 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:25:05.808841  503305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:25:05.816681  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:25:05.835830  503305 start.go:296] duration metric: took 158.182209ms for postStartSetup
	I1020 13:25:05.835913  503305 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:25:05.835964  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.852951  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:05.958342  503305 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:25:05.963082  503305 fix.go:56] duration metric: took 5.072040491s for fixHost
	I1020 13:25:05.963113  503305 start.go:83] releasing machines lock for "newest-cni-018730", held for 5.072097026s
	I1020 13:25:05.963180  503305 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-018730
	I1020 13:25:05.979934  503305 ssh_runner.go:195] Run: cat /version.json
	I1020 13:25:05.979958  503305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:25:05.979998  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.980010  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:05.997948  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:06.016025  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:06.104614  503305 ssh_runner.go:195] Run: systemctl --version
	I1020 13:25:06.199037  503305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:25:06.238844  503305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:25:06.243275  503305 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:25:06.243355  503305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:25:06.251470  503305 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:25:06.251506  503305 start.go:495] detecting cgroup driver to use...
	I1020 13:25:06.251556  503305 detect.go:187] detected "cgroupfs" cgroup driver on host os
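
The "cgroupfs" result matches the CgroupDriver field in the docker info dump earlier, and it drives the cgroup_manager edit applied to CRI-O a few lines below. The detection is reproducible with a one-liner:

	# Ask the Docker daemon which cgroup driver it uses; prints "cgroupfs" on this host.
	docker info --format '{{.CgroupDriver}}'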
	I1020 13:25:06.251637  503305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:25:06.266898  503305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:25:06.279826  503305 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:25:06.279891  503305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:25:06.295784  503305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:25:06.308894  503305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:25:06.435531  503305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:25:06.560528  503305 docker.go:234] disabling docker service ...
	I1020 13:25:06.560644  503305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:25:06.577733  503305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:25:06.591204  503305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:25:06.712633  503305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:25:06.836146  503305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:25:06.851479  503305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:25:06.871697  503305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:25:06.871768  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.881724  503305 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:25:06.881805  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.897272  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.906783  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.916389  503305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:25:06.925237  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.934468  503305 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:06.944498  503305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
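
Taken together, the sed edits above pin the pause image, force the cgroupfs cgroup manager (matching the driver detected earlier), move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. A grep sketch to confirm the resulting drop-in, with the expected values reconstructed from the commands themselves:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	     /etc/crio/crio.conf.d/02-crio.conf
	# Expected after the edits:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",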
	I1020 13:25:06.953412  503305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:25:06.961169  503305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:25:06.969238  503305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:25:07.090835  503305 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1020 13:25:07.220606  503305 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:25:07.220690  503305 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:25:07.224831  503305 start.go:563] Will wait 60s for crictl version
	I1020 13:25:07.224905  503305 ssh_runner.go:195] Run: which crictl
	I1020 13:25:07.228470  503305 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:25:07.255227  503305 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:25:07.255378  503305 ssh_runner.go:195] Run: crio --version
	I1020 13:25:07.284944  503305 ssh_runner.go:195] Run: crio --version
	I1020 13:25:07.317945  503305 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 13:25:07.320765  503305 cli_runner.go:164] Run: docker network inspect newest-cni-018730 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:25:07.335081  503305 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1020 13:25:07.338893  503305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
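
The one-liner above rewrites /etc/hosts in place: it filters any stale host.minikube.internal entry into a temp file, appends the gateway mapping, and copies the result back. Verification is a single lookup:

	# Should print the gateway mapping that was just added.
	getent hosts host.minikube.internal   # -> 192.168.85.1  host.minikube.internal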
	I1020 13:25:07.351541  503305 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1020 13:25:07.354463  503305 kubeadm.go:883] updating cluster {Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:25:07.354609  503305 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:25:07.354699  503305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:25:07.390615  503305 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:25:07.390639  503305 crio.go:433] Images already preloaded, skipping extraction
	I1020 13:25:07.390700  503305 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:25:07.419365  503305 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:25:07.419387  503305 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:25:07.419398  503305 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1020 13:25:07.419506  503305 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-018730 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1020 13:25:07.419599  503305 ssh_runner.go:195] Run: crio config
	I1020 13:25:07.493035  503305 cni.go:84] Creating CNI manager for ""
	I1020 13:25:07.493059  503305 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:25:07.493112  503305 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1020 13:25:07.493145  503305 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-018730 NodeName:newest-cni-018730 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:25:07.493286  503305 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-018730"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
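
The generated manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one multi-document file, which is shipped to /var/tmp/minikube/kubeadm.yaml.new a few lines below. It can be schema-checked offline with kubeadm's own validator; a sketch, assuming the cached v1.34.1 binary that the binaries check below confirms:

	# Structural validation of the multi-document config; non-zero exit on schema errors.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	     --config /var/tmp/minikube/kubeadm.yaml.new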
	
	I1020 13:25:07.493359  503305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:25:07.501807  503305 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:25:07.501876  503305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:25:07.509955  503305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 13:25:07.523792  503305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:25:07.536820  503305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1020 13:25:07.549820  503305 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:25:07.554619  503305 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:25:07.564881  503305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:25:07.690115  503305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:25:07.706306  503305 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730 for IP: 192.168.85.2
	I1020 13:25:07.706329  503305 certs.go:195] generating shared ca certs ...
	I1020 13:25:07.706344  503305 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:07.706488  503305 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:25:07.706538  503305 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:25:07.706548  503305 certs.go:257] generating profile certs ...
	I1020 13:25:07.706629  503305 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/client.key
	I1020 13:25:07.706695  503305 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key.b19b56d0
	I1020 13:25:07.706737  503305 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.key
	I1020 13:25:07.706844  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:25:07.706878  503305 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:25:07.706893  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:25:07.706923  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:25:07.706955  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:25:07.706981  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:25:07.707024  503305 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:25:07.708149  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:25:07.732641  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:25:07.750867  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:25:07.776526  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:25:07.797968  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 13:25:07.817498  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:25:07.842348  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:25:07.867160  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/newest-cni-018730/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1020 13:25:07.898838  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:25:07.921904  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:25:07.941580  503305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:25:07.960804  503305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:25:07.977225  503305 ssh_runner.go:195] Run: openssl version
	I1020 13:25:07.983737  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:25:07.993219  503305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:25:07.999008  503305 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:25:07.999120  503305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:25:08.054164  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:25:08.062683  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:25:08.073083  503305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:25:08.077213  503305 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:25:08.077311  503305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:25:08.119123  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:25:08.127227  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:25:08.136003  503305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:25:08.139995  503305 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:25:08.140062  503305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:25:08.182192  503305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
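
Each CA above follows the same installation pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 back to it so the system trust store resolves it, which is exactly what the openssl x509 -hash / ln -fs pairs do. The same flow as a reusable sketch (a hypothetical helper, not minikube code):

	install_ca() {                      # usage: install_ca /path/to/cert.pem
	  local pem=$1 hash
	  sudo cp "$pem" /usr/share/ca-certificates/
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "/usr/share/ca-certificates/$(basename "$pem")" \
	              "/etc/ssl/certs/${hash}.0"
	}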
	I1020 13:25:08.190566  503305 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:25:08.194530  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:25:08.236001  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:25:08.277668  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:25:08.320909  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:25:08.369803  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:25:08.426396  503305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
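
Each -checkend 86400 probe makes openssl exit non-zero if the certificate expires within the next 24 hours, so these Runs are cheap freshness checks; a failing cert would be regenerated rather than reused. The same sweep over the control-plane certs, with paths taken from the log:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    && echo "$c: ok" || echo "$c: expires within 24h"
	done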
	I1020 13:25:08.510302  503305 kubeadm.go:400] StartCluster: {Name:newest-cni-018730 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-018730 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:08.510453  503305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:25:08.510565  503305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:25:08.580496  503305 cri.go:89] found id: "143b6525a7ad6f934e45380f7976365e8a6ae6ec4b65dc75abf1434765d1f818"
	I1020 13:25:08.580579  503305 cri.go:89] found id: "77cd44a58f82396ce834a1fa844454d78fc45a286cf2263f2d5b938131f28f2a"
	I1020 13:25:08.580602  503305 cri.go:89] found id: "6829ea5db474ed23b6f22c417c519ff9551292248dec661b2aef9bb5a0d11186"
	I1020 13:25:08.580640  503305 cri.go:89] found id: "ffe7075b79fb761eb811ac31f88b276f449cc72f039fc13d368ca9f41e9b8932"
	I1020 13:25:08.580662  503305 cri.go:89] found id: ""
	I1020 13:25:08.580751  503305 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:25:08.597773  503305 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:25:08Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:25:08.597902  503305 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:25:08.612251  503305 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:25:08.612325  503305 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:25:08.612453  503305 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:25:08.624697  503305 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:25:08.625407  503305 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-018730" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:08.625751  503305 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-018730" cluster setting kubeconfig missing "newest-cni-018730" context setting]
	I1020 13:25:08.626313  503305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:08.628093  503305 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:25:08.641642  503305 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1020 13:25:08.641725  503305 kubeadm.go:601] duration metric: took 29.369719ms to restartPrimaryControlPlane
	I1020 13:25:08.641748  503305 kubeadm.go:402] duration metric: took 131.456374ms to StartCluster
	I1020 13:25:08.641803  503305 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:08.641895  503305 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:08.642953  503305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:08.643238  503305 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:25:08.643744  503305 config.go:182] Loaded profile config "newest-cni-018730": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:08.643765  503305 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:25:08.643844  503305 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-018730"
	I1020 13:25:08.643865  503305 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-018730"
	W1020 13:25:08.643877  503305 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:25:08.643888  503305 addons.go:69] Setting dashboard=true in profile "newest-cni-018730"
	I1020 13:25:08.643985  503305 addons.go:238] Setting addon dashboard=true in "newest-cni-018730"
	W1020 13:25:08.644009  503305 addons.go:247] addon dashboard should already be in state true
	I1020 13:25:08.644043  503305 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:25:08.644672  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:08.643904  503305 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:25:08.643912  503305 addons.go:69] Setting default-storageclass=true in profile "newest-cni-018730"
	I1020 13:25:08.645400  503305 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-018730"
	I1020 13:25:08.645518  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:08.645683  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:08.651190  503305 out.go:179] * Verifying Kubernetes components...
	I1020 13:25:08.659203  503305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:25:08.713935  503305 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:25:08.714085  503305 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:25:08.716972  503305 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:25:08.716995  503305 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:25:08.717061  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:08.719627  503305 addons.go:238] Setting addon default-storageclass=true in "newest-cni-018730"
	W1020 13:25:08.719654  503305 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:25:08.719680  503305 host.go:66] Checking if "newest-cni-018730" exists ...
	I1020 13:25:08.720089  503305 cli_runner.go:164] Run: docker container inspect newest-cni-018730 --format={{.State.Status}}
	I1020 13:25:08.720311  503305 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1020 13:25:04.357222  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:06.856642  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:08.857074  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:25:08.723240  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:25:08.723269  503305 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:25:08.723331  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:08.758251  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:08.780574  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
	I1020 13:25:08.785897  503305 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:25:08.785925  503305 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:25:08.785991  503305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-018730
	I1020 13:25:08.809653  503305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa Username:docker}
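The sshutil lines above pair with the docker inspect calls before them: minikube looks up the host port Docker mapped to the container's 22/tcp, then opens an SSH client with the per-machine key. A rough sketch of both steps, assuming the docker CLI is on PATH and using the key path from the log (function names are illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"

		"golang.org/x/crypto/ssh"
	)

	// sshPort asks Docker which host port is forwarded to the container's sshd,
	// using the same format string as the log lines above.
	func sshPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	// dialSSH opens the client that the scp/run steps above are multiplexed over.
	func dialSSH(port, keyPath string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
		}
		return ssh.Dial("tcp", "127.0.0.1:"+port, cfg)
	}

	func main() {
		port, err := sshPort("newest-cni-018730")
		if err != nil {
			panic(err)
		}
		cli, err := dialSSH(port, "/home/jenkins/minikube-integration/21773-296391/.minikube/machines/newest-cni-018730/id_rsa")
		fmt.Println(cli != nil, err)
	}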
	I1020 13:25:09.013364  503305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:25:09.069659  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:25:09.069733  503305 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:25:09.072489  503305 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:25:09.072610  503305 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:25:09.076954  503305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:25:09.114891  503305 api_server.go:72] duration metric: took 471.590944ms to wait for apiserver process to appear ...
	I1020 13:25:09.114964  503305 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:25:09.115010  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
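The healthz wait that starts here only succeeds at 13:25:17 below; until then api_server.go treats anything but a 200 as "not ready yet": the anonymous 403s appear before the bootstrap RBAC poststarthook has granted unauthenticated access to /healthz, and the 500s while individual poststarthooks are still running. A simplified sketch of the poll loop, with certificate verification skipped for brevity:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the endpoint until it answers 200 "ok" or the deadline
	// passes; 403 and 500 both mean "retry", as in the log above.
	func waitHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy within %s", deadline)
	}

	func main() {
		if err := waitHealthz("https://192.168.85.2:8443/healthz", 6*time.Minute); err != nil {
			panic(err)
		}
	}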
	I1020 13:25:09.133357  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:25:09.133433  503305 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:25:09.192028  503305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:25:09.219008  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:25:09.219091  503305 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:25:09.286893  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:25:09.286963  503305 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:25:09.367613  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:25:09.367691  503305 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:25:09.418665  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:25:09.418740  503305 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:25:09.434415  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:25:09.434489  503305 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:25:09.449210  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:25:09.449289  503305 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:25:09.469368  503305 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:25:09.469437  503305 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:25:09.485269  503305 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
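The batched apply above runs over the same SSH channel the manifests were just copied through: one session, one kubectl invocation, all ten dashboard files. A sketch of that step reusing an *ssh.Client like the one from the earlier sketch (paths and the kubectl invocation are taken from the log line):

	package addons

	import (
		"strings"

		"golang.org/x/crypto/ssh"
	)

	// applyManifests runs the batched kubectl apply in a single SSH session.
	func applyManifests(cli *ssh.Client, files []string) (string, error) {
		sess, err := cli.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.34.1/kubectl apply -f " +
			strings.Join(files, " -f ")
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}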
	W1020 13:25:10.857187  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:13.356882  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:25:14.117288  503305 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1020 13:25:14.117381  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:14.873027  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 13:25:14.873104  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 13:25:14.873134  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:14.967771  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1020 13:25:14.967851  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1020 13:25:15.116075  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:15.288105  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:25:15.288190  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:25:15.615904  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:15.657502  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:25:15.657583  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:25:16.116098  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:16.146221  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:25:16.146298  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:25:16.615103  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:16.680014  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:25:16.680110  503305 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
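Each 500 body above is a plain-text checklist, one probe per line; across the four polls the failing set shrinks from three poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, bootstrap-controller) down to rbac/bootstrap-roles alone, just before the 200 arrives below. A throwaway parser for that format (my own helper, not minikube code):

	package main

	import (
		"fmt"
		"strings"
	)

	// failingChecks extracts the names of the [-] lines from a healthz body.
	func failingChecks(body string) []string {
		var failed []string
		for _, line := range strings.Split(body, "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "[-]") {
				name := strings.TrimPrefix(line, "[-]")
				name = strings.TrimSuffix(name, " failed: reason withheld")
				failed = append(failed, name)
			}
		}
		return failed
	}

	func main() {
		body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
		fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
	}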
	I1020 13:25:16.696563  503305 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.619537247s)
	I1020 13:25:16.696689  503305 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.504589664s)
	I1020 13:25:16.804705  503305 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.319308208s)
	I1020 13:25:16.807951  503305 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-018730 addons enable metrics-server
	
	I1020 13:25:16.810872  503305 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1020 13:25:16.813654  503305 addons.go:514] duration metric: took 8.169879499s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1020 13:25:17.115118  503305 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:25:17.125244  503305 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 13:25:17.126354  503305 api_server.go:141] control plane version: v1.34.1
	I1020 13:25:17.126404  503305 api_server.go:131] duration metric: took 8.011406401s to wait for apiserver health ...
	I1020 13:25:17.126428  503305 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:25:17.130046  503305 system_pods.go:59] 8 kube-system pods found
	I1020 13:25:17.130129  503305 system_pods.go:61] "coredns-66bc5c9577-sjxcr" [8a89d2c7-108a-4c4b-9f12-c918862fa04a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 13:25:17.130161  503305 system_pods.go:61] "etcd-newest-cni-018730" [19f3b3ce-69b4-4765-b079-797644b0d529] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:25:17.130203  503305 system_pods.go:61] "kindnet-znl5b" [5a6ca5b2-16cc-4b03-a59f-b1867665c8c8] Running
	I1020 13:25:17.130233  503305 system_pods.go:61] "kube-apiserver-newest-cni-018730" [ba8650fa-d31b-4e19-b6ed-806262274ee3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:25:17.130259  503305 system_pods.go:61] "kube-controller-manager-newest-cni-018730" [f4e1cbc5-da99-4dcc-8fa1-465324888375] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:25:17.130295  503305 system_pods.go:61] "kube-proxy-cfrgk" [2b049f68-632a-4288-9c43-da5c3d72e46f] Running
	I1020 13:25:17.130323  503305 system_pods.go:61] "kube-scheduler-newest-cni-018730" [0b6f56c6-4faa-479c-b013-20b4ea5e1c5a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:25:17.130348  503305 system_pods.go:61] "storage-provisioner" [81054ccb-6cd7-47f4-8244-a5a5df3d6ca1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1020 13:25:17.130388  503305 system_pods.go:74] duration metric: took 3.94004ms to wait for pod list to return data ...
	I1020 13:25:17.130416  503305 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:25:17.137175  503305 default_sa.go:45] found service account: "default"
	I1020 13:25:17.137252  503305 default_sa.go:55] duration metric: took 6.80123ms for default service account to be created ...
	I1020 13:25:17.137280  503305 kubeadm.go:586] duration metric: took 8.493984206s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1020 13:25:17.137327  503305 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:25:17.142609  503305 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:25:17.142687  503305 node_conditions.go:123] node cpu capacity is 2
	I1020 13:25:17.142714  503305 node_conditions.go:105] duration metric: took 5.362532ms to run NodePressure ...
	I1020 13:25:17.142739  503305 start.go:241] waiting for startup goroutines ...
	I1020 13:25:17.142774  503305 start.go:246] waiting for cluster config update ...
	I1020 13:25:17.142804  503305 start.go:255] writing updated cluster config ...
	I1020 13:25:17.143163  503305 ssh_runner.go:195] Run: rm -f paused
	I1020 13:25:17.239646  503305 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:25:17.242883  503305 out.go:179] * Done! kubectl is now configured to use "newest-cni-018730" cluster and "default" namespace by default
	W1020 13:25:15.856640  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	W1020 13:25:18.355894  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
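By the Done! line the run has verified the apiserver, found eight kube-system pods, confirmed the default service account, and read the node's capacity (2 CPUs, 203034800Ki of ephemeral storage). A compressed client-go equivalent of those checks, pointed at the kubeconfig path from the log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21773-296391/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items)) // 8 in the run above
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-018730", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("cpu:", node.Status.Capacity.Cpu(), "ephemeral:", node.Status.Capacity.StorageEphemeral())
	}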
	
	
	==> CRI-O <==
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.14252315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.153985915Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=caa14aae-e220-4afb-83c7-717740321959 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.163037957Z" level=info msg="Ran pod sandbox 1074c3f4c1aeb77198cd094983904e1e90cb5846d651df5194d51481a0d14866 with infra container: kube-system/kube-proxy-cfrgk/POD" id=caa14aae-e220-4afb-83c7-717740321959 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.169251447Z" level=info msg="Running pod sandbox: kube-system/kindnet-znl5b/POD" id=18819081-9090-4db0-93a8-6204b8c95b71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.169313323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.176658758Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=18819081-9090-4db0-93a8-6204b8c95b71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.195374674Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=72cfb100-3471-4fb7-913a-b511c5a09559 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.205288627Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=58f5a987-5baa-4004-a5d0-31a24b809ca9 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.211208894Z" level=info msg="Ran pod sandbox 8cc8ffbaba43935a3b19722c0f1fe8549ac195b6cd2fb35497b251936765723d with infra container: kube-system/kindnet-znl5b/POD" id=18819081-9090-4db0-93a8-6204b8c95b71 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.215888612Z" level=info msg="Creating container: kube-system/kube-proxy-cfrgk/kube-proxy" id=fdf80c34-b324-4f75-84f8-b9eef6669c62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.215991292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.224857568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.232434948Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d15442c2-4a2c-4542-85b6-47def1cf5586 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.232896017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.2440902Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f7e9b032-8a65-4908-b4e9-6907330f890f name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.246223874Z" level=info msg="Creating container: kube-system/kindnet-znl5b/kindnet-cni" id=ffa9b90f-41ef-422c-a27f-7484fee16303 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.246471721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.259816465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.26058134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.292570573Z" level=info msg="Created container 58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785: kube-system/kindnet-znl5b/kindnet-cni" id=ffa9b90f-41ef-422c-a27f-7484fee16303 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.293181108Z" level=info msg="Starting container: 58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785" id=7cfce843-ac64-402f-b9d6-0d7ec3c4cbbe name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.294788662Z" level=info msg="Started container" PID=1063 containerID=58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785 description=kube-system/kindnet-znl5b/kindnet-cni id=7cfce843-ac64-402f-b9d6-0d7ec3c4cbbe name=/runtime.v1.RuntimeService/StartContainer sandboxID=8cc8ffbaba43935a3b19722c0f1fe8549ac195b6cd2fb35497b251936765723d
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.360913084Z" level=info msg="Created container 8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea: kube-system/kube-proxy-cfrgk/kube-proxy" id=fdf80c34-b324-4f75-84f8-b9eef6669c62 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.364577371Z" level=info msg="Starting container: 8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea" id=3e01f8f9-0a58-4a1c-a2cb-e19418d24049 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:25:16 newest-cni-018730 crio[612]: time="2025-10-20T13:25:16.371993863Z" level=info msg="Started container" PID=1059 containerID=8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea description=kube-system/kube-proxy-cfrgk/kube-proxy id=3e01f8f9-0a58-4a1c-a2cb-e19418d24049 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1074c3f4c1aeb77198cd094983904e1e90cb5846d651df5194d51481a0d14866
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	58257a846c41e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   8cc8ffbaba439       kindnet-znl5b                               kube-system
	8b905eb177a0f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   1074c3f4c1aeb       kube-proxy-cfrgk                            kube-system
	143b6525a7ad6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   bcdea74c09f21       kube-apiserver-newest-cni-018730            kube-system
	77cd44a58f823       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   971a9e1d9b991       kube-controller-manager-newest-cni-018730   kube-system
	6829ea5db474e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   1f862917273a9       kube-scheduler-newest-cni-018730            kube-system
	ffe7075b79fb7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   8e953928f9de5       etcd-newest-cni-018730                      kube-system
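The table above is the CRI runtime's view of the node. The same listing can be reproduced from inside the node with crictl, here wrapped in a trivial exec call (this assumes crictl is on the node's PATH, as it is in the minikube image):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(string(out))
	}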
	
	
	==> describe nodes <==
	Name:               newest-cni-018730
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-018730
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=newest-cni-018730
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_24_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:24:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-018730
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:25:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:25:15 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:25:15 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:25:15 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 20 Oct 2025 13:25:15 +0000   Mon, 20 Oct 2025 13:24:41 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-018730
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                02857502-1595-48f5-a221-2258d77f161c
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-018730                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-znl5b                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-018730             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-018730    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-cfrgk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-018730             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-018730 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-018730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-018730 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-018730 event: Registered Node newest-cni-018730 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x9 over 16s)  kubelet          Node newest-cni-018730 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-018730 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x7 over 16s)  kubelet          Node newest-cni-018730 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-018730 event: Registered Node newest-cni-018730 in Controller
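The describe output explains the Pending pods seen at 13:25:17: the node still reports Ready=False because kindnet had only just restarted and no CNI config was in /etc/cni/net.d/ yet, and it carries the node.kubernetes.io/not-ready:NoSchedule taint that coredns and storage-provisioner do not tolerate. A small client-go check for exactly those two signals:

	package main

	import (
		"context"
		"fmt"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21773-296391/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-018730", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, t := range node.Spec.Taints {
			fmt.Println("taint:", t.Key, t.Effect) // node.kubernetes.io/not-ready NoSchedule
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				fmt.Println("Ready:", c.Status, c.Message)
			}
		}
	}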
	
	
	==> dmesg <==
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	[ +43.225983] overlayfs: idmapped layers are currently not supported
	[Oct20 13:24] overlayfs: idmapped layers are currently not supported
	[Oct20 13:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ffe7075b79fb761eb811ac31f88b276f449cc72f039fc13d368ca9f41e9b8932] <==
	{"level":"warn","ts":"2025-10-20T13:25:12.603613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.673571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.673989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.708860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.729032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.748346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.768661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.800110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.823947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.840963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.893511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.914926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.935947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.966595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:12.986575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.055482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.062755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.095842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.119065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.156230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.177286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.205174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.231317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.270593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:25:13.408393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41194","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:25:23 up  3:07,  0 user,  load average: 3.91, 2.96, 2.60
	Linux newest-cni-018730 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [58257a846c41eaf3b4d13213924c89be7e443bf3fd15c469a411e83615ded785] <==
	I1020 13:25:16.403148       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:25:16.403436       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1020 13:25:16.403600       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:25:16.403612       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:25:16.403622       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:25:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:25:16.612657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:25:16.612735       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:25:16.612767       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:25:16.618580       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [143b6525a7ad6f934e45380f7976365e8a6ae6ec4b65dc75abf1434765d1f818] <==
	I1020 13:25:15.069699       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1020 13:25:15.069739       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:25:15.094907       1 aggregator.go:171] initial CRD sync complete...
	I1020 13:25:15.094945       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 13:25:15.094954       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:25:15.094961       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:25:15.104522       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1020 13:25:15.107229       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 13:25:15.107267       1 policy_source.go:240] refreshing policies
	I1020 13:25:15.111320       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1020 13:25:15.111348       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1020 13:25:15.137788       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1020 13:25:15.322272       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:25:15.783797       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:25:16.041495       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:25:16.257585       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:25:16.418453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:25:16.525902       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:25:16.578161       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:25:16.773665       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.229.87"}
	I1020 13:25:16.798168       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.201.146"}
	I1020 13:25:19.771840       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:25:19.813015       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:25:19.884904       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 13:25:19.960562       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [77cd44a58f82396ce834a1fa844454d78fc45a286cf2263f2d5b938131f28f2a] <==
	I1020 13:25:19.306410       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:25:19.308641       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1020 13:25:19.310351       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1020 13:25:19.313747       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1020 13:25:19.318870       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:25:19.318981       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:25:19.319060       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-018730"
	I1020 13:25:19.319106       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1020 13:25:19.321164       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 13:25:19.326205       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:25:19.326746       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:25:19.332406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:25:19.332436       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:25:19.332444       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:25:19.332529       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:25:19.333950       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1020 13:25:19.334182       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:25:19.347774       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:25:19.350436       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:25:19.358428       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 13:25:19.355797       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 13:25:19.355935       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 13:25:19.356000       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 13:25:19.353878       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1020 13:25:19.361207       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	
	
	==> kube-proxy [8b905eb177a0fd99c4dea01ed32ad7ba2becb567199875e548364d3090aac4ea] <==
	I1020 13:25:16.791761       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:25:16.953452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:25:17.056485       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:25:17.056619       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1020 13:25:17.056732       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:25:17.108863       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:25:17.109017       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:25:17.115005       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:25:17.115479       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:25:17.123704       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:25:17.131453       1 config.go:200] "Starting service config controller"
	I1020 13:25:17.131482       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:25:17.131497       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:25:17.131501       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:25:17.131509       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:25:17.131515       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:25:17.132136       1 config.go:309] "Starting node config controller"
	I1020 13:25:17.132158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:25:17.132164       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:25:17.231875       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:25:17.231982       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:25:17.232022       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6829ea5db474ed23b6f22c417c519ff9551292248dec661b2aef9bb5a0d11186] <==
	I1020 13:25:14.655048       1 serving.go:386] Generated self-signed cert in-memory
	I1020 13:25:17.393171       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:25:17.393311       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:25:17.399113       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:25:17.399790       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 13:25:17.399849       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 13:25:17.399909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:25:17.413290       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:25:17.413327       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:25:17.413368       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:25:17.413376       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:25:17.500484       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 13:25:17.513715       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:25:17.513831       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.235371     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.338784     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.338927     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.338973     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.340475     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.362789     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-018730\" already exists" pod="kube-system/etcd-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.362822     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.387393     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-018730\" already exists" pod="kube-system/kube-apiserver-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.387448     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.433055     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-018730\" already exists" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.433114     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.457571     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-018730\" already exists" pod="kube-system/kube-scheduler-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.553475     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: E1020 13:25:15.593572     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-018730\" already exists" pod="kube-system/kube-controller-manager-newest-cni-018730"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.830333     728 apiserver.go:52] "Watching apiserver"
	Oct 20 13:25:15 newest-cni-018730 kubelet[728]: I1020 13:25:15.938542     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015103     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-xtables-lock\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015159     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b049f68-632a-4288-9c43-da5c3d72e46f-lib-modules\") pod \"kube-proxy-cfrgk\" (UID: \"2b049f68-632a-4288-9c43-da5c3d72e46f\") " pod="kube-system/kube-proxy-cfrgk"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015179     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-cni-cfg\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015203     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a6ca5b2-16cc-4b03-a59f-b1867665c8c8-lib-modules\") pod \"kindnet-znl5b\" (UID: \"5a6ca5b2-16cc-4b03-a59f-b1867665c8c8\") " pod="kube-system/kindnet-znl5b"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.015245     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b049f68-632a-4288-9c43-da5c3d72e46f-xtables-lock\") pod \"kube-proxy-cfrgk\" (UID: \"2b049f68-632a-4288-9c43-da5c3d72e46f\") " pod="kube-system/kube-proxy-cfrgk"
	Oct 20 13:25:16 newest-cni-018730 kubelet[728]: I1020 13:25:16.080295     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 20 13:25:18 newest-cni-018730 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:25:18 newest-cni-018730 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:25:18 newest-cni-018730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-018730 -n newest-cni-018730
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-018730 -n newest-cni-018730: exit status 2 (402.89596ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-018730 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-sjxcr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj
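Note: the field selector in the command above is how the harness enumerates pods that are not (yet) Running across all namespaces. For reference, the same query can be issued from Go with client-go; this is a minimal illustrative sketch, not part of the harness, and the default kubeconfig path is an assumption (the harness instead passes --context to kubectl):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: default kubeconfig location.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter as kubectl's -A --field-selector=status.phase!=Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}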
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj: exit status 1 (87.394662ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-sjxcr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-qhv5v" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-w2lzj" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-018730 describe pod coredns-66bc5c9577-sjxcr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-qhv5v kubernetes-dashboard-855c9754f9-w2lzj: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.07s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-744804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-744804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (339.192113ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:25:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-744804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-744804 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-744804 describe deploy/metrics-server -n kube-system: exit status 1 (131.342194ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-744804 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
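Note on the failure mode: the stderr above shows the addon enable aborting in its pre-flight pause check. The probe is quoted verbatim in the error: "sudo runc list -f json" fails with "open /run/runc: no such file or directory", so minikube cannot tell whether the cluster is paused and exits with MK_ADDON_ENABLE_PAUSED. A minimal sketch of that kind of probe follows (illustrative only, not minikube's actual implementation; assumes runc is on PATH and uses its default state directory):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the two fields of `runc list -f json` output that a
// pause check needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

// listPaused runs the same command quoted in the error above. When
// /run/runc is missing (e.g. the runtime keeps its runc root elsewhere),
// the command exits non-zero and the whole check fails, as seen in this test.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused: list paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}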
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-744804
helpers_test.go:243: (dbg) docker inspect no-preload-744804:

-- stdout --
	[
	    {
	        "Id": "7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41",
	        "Created": "2025-10-20T13:23:35.394425539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496035,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:23:35.473627221Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/hostname",
	        "HostsPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/hosts",
	        "LogPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41-json.log",
	        "Name": "/no-preload-744804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-744804:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-744804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41",
	                "LowerDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-744804",
	                "Source": "/var/lib/docker/volumes/no-preload-744804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-744804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-744804",
	                "name.minikube.sigs.k8s.io": "no-preload-744804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6bd4f5b165381f6114113d3e8f9902fc192f0f32df866ad4cc9095506d46af13",
	            "SandboxKey": "/var/run/docker/netns/6bd4f5b16538",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-744804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:14:d2:b2:76:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "307dee052f6f076bff152f38e429e93b9787d013b30129b59f6e7b891323decf",
	                    "EndpointID": "d904d3d9e12229d5b1f30406c2d9a1d96f55bcbf5f048621dc20a25f8c2fea70",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-744804",
	                        "7c7d00bb470e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
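Aside: the NetworkSettings.Ports map in the inspect output above is how the kic driver publishes each guest port (22, 2376, 5000, 8443, 32443) on an ephemeral 127.0.0.1 host port. A published port can be read back with a docker inspect format template; a small illustrative sketch in Go (container name taken from the profile above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the localhost port Docker mapped to a guest TCP port,
// walking the same NetworkSettings.Ports structure shown above.
func hostPort(container, guestPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, guestPort)
	out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("no-preload-744804", "8443")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver published at 127.0.0.1:" + p) // 33451 in this run
}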
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-744804 -n no-preload-744804
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-744804 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-744804 logs -n 25: (1.548773427s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-979197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │                     │
	│ stop    │ -p embed-certs-979197 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ addons  │ enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:22 UTC │
	│ start   │ -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:22 UTC │ 20 Oct 25 13:23 UTC │
	│ image   │ default-k8s-diff-port-794175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ pause   │ -p default-k8s-diff-port-794175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p disable-driver-mounts-972433                                                                                                                                                                                                               │ disable-driver-mounts-972433 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:25 UTC │
	│ image   │ embed-certs-979197 image list --format=json                                                                                                                                                                                                   │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ pause   │ -p embed-certs-979197 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ stop    │ -p newest-cni-018730 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-018730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ image   │ newest-cni-018730 image list --format=json                                                                                                                                                                                                    │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ pause   │ -p newest-cni-018730 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	│ delete  │ -p newest-cni-018730                                                                                                                                                                                                                          │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ delete  │ -p newest-cni-018730                                                                                                                                                                                                                          │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p auto-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-308474                  │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-744804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:25:26
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:25:26.571487  506566 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:25:26.571758  506566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:26.571789  506566 out.go:374] Setting ErrFile to fd 2...
	I1020 13:25:26.571812  506566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:26.572107  506566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:25:26.572733  506566 out.go:368] Setting JSON to false
	I1020 13:25:26.573868  506566 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11277,"bootTime":1760955450,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:25:26.574001  506566 start.go:141] virtualization:  
	I1020 13:25:26.577816  506566 out.go:179] * [auto-308474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:25:26.581861  506566 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:25:26.581938  506566 notify.go:220] Checking for updates...
	I1020 13:25:26.588140  506566 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:25:26.591151  506566 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:26.594069  506566 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:25:26.597066  506566 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:25:26.599972  506566 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:25:26.603468  506566 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:26.603570  506566 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:25:26.626839  506566 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:25:26.626977  506566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:26.693371  506566 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:25:26.683515407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:26.693477  506566 docker.go:318] overlay module found
	I1020 13:25:26.696453  506566 out.go:179] * Using the docker driver based on user configuration
	I1020 13:25:26.699283  506566 start.go:305] selected driver: docker
	I1020 13:25:26.699302  506566 start.go:925] validating driver "docker" against <nil>
	I1020 13:25:26.699316  506566 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:25:26.700051  506566 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:26.772887  506566 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:25:26.759627803 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:26.773158  506566 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 13:25:26.773494  506566 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:25:26.776442  506566 out.go:179] * Using Docker driver with root privileges
	I1020 13:25:26.779305  506566 cni.go:84] Creating CNI manager for ""
	I1020 13:25:26.779378  506566 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:25:26.779392  506566 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 13:25:26.779494  506566 start.go:349] cluster config:
	{Name:auto-308474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-308474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:26.784309  506566 out.go:179] * Starting "auto-308474" primary control-plane node in "auto-308474" cluster
	I1020 13:25:26.787090  506566 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:25:26.790109  506566 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:25:26.792975  506566 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:25:26.793004  506566 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:25:26.793031  506566 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 13:25:26.793056  506566 cache.go:58] Caching tarball of preloaded images
	I1020 13:25:26.793148  506566 preload.go:233] Found /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1020 13:25:26.793160  506566 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1020 13:25:26.793281  506566 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/config.json ...
	I1020 13:25:26.793322  506566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/config.json: {Name:mk0fe8ba41787e5c3442fc948461d474c229bd8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:25:26.814282  506566 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:25:26.814303  506566 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:25:26.814318  506566 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:25:26.814341  506566 start.go:360] acquireMachinesLock for auto-308474: {Name:mkaef615866bb72a2582ccbcf15defac8135c78e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:26.814442  506566 start.go:364] duration metric: took 86.343µs to acquireMachinesLock for "auto-308474"
	I1020 13:25:26.814468  506566 start.go:93] Provisioning new machine with config: &{Name:auto-308474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-308474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:25:26.814540  506566 start.go:125] createHost starting for "" (driver="docker")
	W1020 13:25:24.358026  495732 node_ready.go:57] node "no-preload-744804" has "Ready":"False" status (will retry)
	I1020 13:25:25.860194  495732 node_ready.go:49] node "no-preload-744804" is "Ready"
	I1020 13:25:25.860221  495732 node_ready.go:38] duration metric: took 57.506868431s for node "no-preload-744804" to be "Ready" ...
	I1020 13:25:25.860234  495732 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:25:25.860296  495732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:25:25.904742  495732 api_server.go:72] duration metric: took 58.821056487s to wait for apiserver process to appear ...
	I1020 13:25:25.904764  495732 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:25:25.904808  495732 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:25:25.932009  495732 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 13:25:25.937332  495732 api_server.go:141] control plane version: v1.34.1
	I1020 13:25:25.937359  495732 api_server.go:131] duration metric: took 32.589143ms to wait for apiserver health ...
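
The healthz probe above is easy to reproduce by hand. A minimal sketch, assuming the host can reach the node IP; -k skips TLS verification since the cluster CA is not in the host trust store, and /healthz permits anonymous access by default:

	# probe the apiserver health endpoint the log checks above (expect: ok)
	curl -k "https://192.168.76.2:8443/healthz"
	# per-check breakdown of the same endpoint
	curl -k "https://192.168.76.2:8443/healthz?verbose"
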
	I1020 13:25:25.937368  495732 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:25:25.945917  495732 system_pods.go:59] 8 kube-system pods found
	I1020 13:25:25.946024  495732 system_pods.go:61] "coredns-66bc5c9577-czxmg" [dfe5480f-3c87-4f50-8890-9aeb8740860b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:25:25.946034  495732 system_pods.go:61] "etcd-no-preload-744804" [861cd06e-ae97-40a2-94f3-c36f118ae148] Running
	I1020 13:25:25.946041  495732 system_pods.go:61] "kindnet-tqpf7" [d65258f0-f2a5-4c71-910b-d148291111ae] Running
	I1020 13:25:25.946045  495732 system_pods.go:61] "kube-apiserver-no-preload-744804" [5045b24e-f1ef-4e65-938c-3999ea03c565] Running
	I1020 13:25:25.946050  495732 system_pods.go:61] "kube-controller-manager-no-preload-744804" [f842efbf-e39d-4c96-b2d2-14918e2a33a6] Running
	I1020 13:25:25.946054  495732 system_pods.go:61] "kube-proxy-bv8x8" [835b8b0c-6e21-43be-9656-1e09387eab43] Running
	I1020 13:25:25.946058  495732 system_pods.go:61] "kube-scheduler-no-preload-744804" [469f86bf-dc90-42fe-9d33-901b8c97aabc] Running
	I1020 13:25:25.946073  495732 system_pods.go:61] "storage-provisioner" [31880320-20a8-4cbe-b5c2-4b1a321c8501] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:25:25.946081  495732 system_pods.go:74] duration metric: took 8.707193ms to wait for pod list to return data ...
	I1020 13:25:25.946089  495732 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:25:25.949877  495732 default_sa.go:45] found service account: "default"
	I1020 13:25:25.949898  495732 default_sa.go:55] duration metric: took 3.802446ms for default service account to be created ...
	I1020 13:25:25.949908  495732 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:25:25.960259  495732 system_pods.go:86] 8 kube-system pods found
	I1020 13:25:25.960294  495732 system_pods.go:89] "coredns-66bc5c9577-czxmg" [dfe5480f-3c87-4f50-8890-9aeb8740860b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:25:25.960300  495732 system_pods.go:89] "etcd-no-preload-744804" [861cd06e-ae97-40a2-94f3-c36f118ae148] Running
	I1020 13:25:25.960306  495732 system_pods.go:89] "kindnet-tqpf7" [d65258f0-f2a5-4c71-910b-d148291111ae] Running
	I1020 13:25:25.960310  495732 system_pods.go:89] "kube-apiserver-no-preload-744804" [5045b24e-f1ef-4e65-938c-3999ea03c565] Running
	I1020 13:25:25.960315  495732 system_pods.go:89] "kube-controller-manager-no-preload-744804" [f842efbf-e39d-4c96-b2d2-14918e2a33a6] Running
	I1020 13:25:25.960322  495732 system_pods.go:89] "kube-proxy-bv8x8" [835b8b0c-6e21-43be-9656-1e09387eab43] Running
	I1020 13:25:25.960328  495732 system_pods.go:89] "kube-scheduler-no-preload-744804" [469f86bf-dc90-42fe-9d33-901b8c97aabc] Running
	I1020 13:25:25.960334  495732 system_pods.go:89] "storage-provisioner" [31880320-20a8-4cbe-b5c2-4b1a321c8501] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:25:25.960356  495732 retry.go:31] will retry after 287.132326ms: missing components: kube-dns
	I1020 13:25:26.256195  495732 system_pods.go:86] 8 kube-system pods found
	I1020 13:25:26.256234  495732 system_pods.go:89] "coredns-66bc5c9577-czxmg" [dfe5480f-3c87-4f50-8890-9aeb8740860b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:25:26.256243  495732 system_pods.go:89] "etcd-no-preload-744804" [861cd06e-ae97-40a2-94f3-c36f118ae148] Running
	I1020 13:25:26.256248  495732 system_pods.go:89] "kindnet-tqpf7" [d65258f0-f2a5-4c71-910b-d148291111ae] Running
	I1020 13:25:26.256252  495732 system_pods.go:89] "kube-apiserver-no-preload-744804" [5045b24e-f1ef-4e65-938c-3999ea03c565] Running
	I1020 13:25:26.256258  495732 system_pods.go:89] "kube-controller-manager-no-preload-744804" [f842efbf-e39d-4c96-b2d2-14918e2a33a6] Running
	I1020 13:25:26.256261  495732 system_pods.go:89] "kube-proxy-bv8x8" [835b8b0c-6e21-43be-9656-1e09387eab43] Running
	I1020 13:25:26.256265  495732 system_pods.go:89] "kube-scheduler-no-preload-744804" [469f86bf-dc90-42fe-9d33-901b8c97aabc] Running
	I1020 13:25:26.256268  495732 system_pods.go:89] "storage-provisioner" [31880320-20a8-4cbe-b5c2-4b1a321c8501] Running
	I1020 13:25:26.256276  495732 system_pods.go:126] duration metric: took 306.362495ms to wait for k8s-apps to be running ...
	I1020 13:25:26.256283  495732 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:25:26.256344  495732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:25:26.287495  495732 system_svc.go:56] duration metric: took 31.193768ms WaitForService to wait for kubelet
	I1020 13:25:26.287520  495732 kubeadm.go:586] duration metric: took 59.203841051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:25:26.287537  495732 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:25:26.291032  495732 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:25:26.291059  495732 node_conditions.go:123] node cpu capacity is 2
	I1020 13:25:26.291072  495732 node_conditions.go:105] duration metric: took 3.529204ms to run NodePressure ...
	I1020 13:25:26.291084  495732 start.go:241] waiting for startup goroutines ...
	I1020 13:25:26.291093  495732 start.go:246] waiting for cluster config update ...
	I1020 13:25:26.291103  495732 start.go:255] writing updated cluster config ...
	I1020 13:25:26.291391  495732 ssh_runner.go:195] Run: rm -f paused
	I1020 13:25:26.296360  495732 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:25:26.308350  495732 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czxmg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:27.317395  495732 pod_ready.go:94] pod "coredns-66bc5c9577-czxmg" is "Ready"
	I1020 13:25:27.317425  495732 pod_ready.go:86] duration metric: took 1.00897369s for pod "coredns-66bc5c9577-czxmg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:27.322841  495732 pod_ready.go:83] waiting for pod "etcd-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:27.329540  495732 pod_ready.go:94] pod "etcd-no-preload-744804" is "Ready"
	I1020 13:25:27.329571  495732 pod_ready.go:86] duration metric: took 6.702292ms for pod "etcd-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:27.332290  495732 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:27.340055  495732 pod_ready.go:94] pod "kube-apiserver-no-preload-744804" is "Ready"
	I1020 13:25:27.340126  495732 pod_ready.go:86] duration metric: took 7.806274ms for pod "kube-apiserver-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:27.345341  495732 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:27.516339  495732 pod_ready.go:94] pod "kube-controller-manager-no-preload-744804" is "Ready"
	I1020 13:25:27.516384  495732 pod_ready.go:86] duration metric: took 171.008426ms for pod "kube-controller-manager-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:27.712769  495732 pod_ready.go:83] waiting for pod "kube-proxy-bv8x8" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:28.112518  495732 pod_ready.go:94] pod "kube-proxy-bv8x8" is "Ready"
	I1020 13:25:28.112548  495732 pod_ready.go:86] duration metric: took 399.756318ms for pod "kube-proxy-bv8x8" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:28.312829  495732 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:28.712141  495732 pod_ready.go:94] pod "kube-scheduler-no-preload-744804" is "Ready"
	I1020 13:25:28.712173  495732 pod_ready.go:86] duration metric: took 399.313358ms for pod "kube-scheduler-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:25:28.712185  495732 pod_ready.go:40] duration metric: took 2.415783355s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
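
A hand-rolled equivalent of this extra readiness wait, sketched with kubectl; the label selectors come from the log line above, and the context name assumes minikube's default profile-named context:

	# block until the same labeled kube-system pods report Ready, with the log's 4m cap
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context no-preload-744804 -n kube-system wait pod \
	    -l "$sel" --for=condition=Ready --timeout=4m0s
	done
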
	I1020 13:25:28.785858  495732 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:25:28.791830  495732 out.go:179] * Done! kubectl is now configured to use "no-preload-744804" cluster and "default" namespace by default
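
The minor-skew note two lines up reflects kubectl's support policy of at most one minor version between client and server, so 1.33 against 1.34 is within bounds. Checking the skew by hand, assuming jq is installed:

	kubectl version --output=json \
	  | jq -r '"client: \(.clientVersion.gitVersion)  server: \(.serverVersion.gitVersion)"'
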
	I1020 13:25:26.819768  506566 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1020 13:25:26.820072  506566 start.go:159] libmachine.API.Create for "auto-308474" (driver="docker")
	I1020 13:25:26.820125  506566 client.go:168] LocalClient.Create starting
	I1020 13:25:26.820195  506566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem
	I1020 13:25:26.820236  506566 main.go:141] libmachine: Decoding PEM data...
	I1020 13:25:26.820253  506566 main.go:141] libmachine: Parsing certificate...
	I1020 13:25:26.820309  506566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem
	I1020 13:25:26.820331  506566 main.go:141] libmachine: Decoding PEM data...
	I1020 13:25:26.820348  506566 main.go:141] libmachine: Parsing certificate...
	I1020 13:25:26.820766  506566 cli_runner.go:164] Run: docker network inspect auto-308474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1020 13:25:26.836554  506566 cli_runner.go:211] docker network inspect auto-308474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1020 13:25:26.836654  506566 network_create.go:284] running [docker network inspect auto-308474] to gather additional debugging logs...
	I1020 13:25:26.836676  506566 cli_runner.go:164] Run: docker network inspect auto-308474
	W1020 13:25:26.852203  506566 cli_runner.go:211] docker network inspect auto-308474 returned with exit code 1
	I1020 13:25:26.852237  506566 network_create.go:287] error running [docker network inspect auto-308474]: docker network inspect auto-308474: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-308474 not found
	I1020 13:25:26.852251  506566 network_create.go:289] output of [docker network inspect auto-308474]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-308474 not found
	
	** /stderr **
	I1020 13:25:26.852350  506566 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:25:26.869861  506566 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-31214b196961 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:99:57:10:1b:40} reservation:<nil>}
	I1020 13:25:26.870117  506566 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf6e9e751b4a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:16:0d:2b:68:24:bc} reservation:<nil>}
	I1020 13:25:26.870461  506566 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-076921d0625d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8a:c5:51:b1:3d:0c} reservation:<nil>}
	I1020 13:25:26.870757  506566 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-307dee052f6f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:bd:c0:83:5e:74} reservation:<nil>}
	I1020 13:25:26.871223  506566 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e9800}
	I1020 13:25:26.871248  506566 network_create.go:124] attempt to create docker network auto-308474 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1020 13:25:26.871325  506566 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-308474 auto-308474
	I1020 13:25:26.935357  506566 network_create.go:108] docker network auto-308474 192.168.85.0/24 created
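
The subnet scan and network creation above condense to the following shell sketch; minikube's labels and the masquerade/icc options are omitted for brevity:

	# list the subnets already claimed by existing docker networks
	docker network ls -q | xargs docker network inspect \
	  --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# 192.168.85.0/24 was the first free /24, so the cluster network lands there
	docker network create --driver=bridge \
	  --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o com.docker.network.driver.mtu=1500 auto-308474
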
	I1020 13:25:26.935392  506566 kic.go:121] calculated static IP "192.168.85.2" for the "auto-308474" container
	I1020 13:25:26.935474  506566 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1020 13:25:26.952937  506566 cli_runner.go:164] Run: docker volume create auto-308474 --label name.minikube.sigs.k8s.io=auto-308474 --label created_by.minikube.sigs.k8s.io=true
	I1020 13:25:26.976099  506566 oci.go:103] Successfully created a docker volume auto-308474
	I1020 13:25:26.976203  506566 cli_runner.go:164] Run: docker run --rm --name auto-308474-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-308474 --entrypoint /usr/bin/test -v auto-308474:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1020 13:25:27.511420  506566 oci.go:107] Successfully prepared a docker volume auto-308474
	I1020 13:25:27.511474  506566 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:25:27.511493  506566 kic.go:194] Starting extracting preloaded images to volume ...
	I1020 13:25:27.511571  506566 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-308474:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1020 13:25:32.074249  506566 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-308474:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.562632095s)
	I1020 13:25:32.074276  506566 kic.go:203] duration metric: took 4.562780347s to extract preloaded images to volume ...
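
The extraction step above boils down to a throwaway kicbase container whose only job is to untar the lz4 image preload into the cluster's named volume; paths are taken from the log, with the image digest dropped for readability:

	PRELOAD=/home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773
	# untar the preloaded images straight into the volume backing /var of the node
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD":/preloaded.tar:ro -v auto-308474:/extractDir \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir
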
	W1020 13:25:32.074411  506566 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1020 13:25:32.074520  506566 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1020 13:25:32.154973  506566 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-308474 --name auto-308474 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-308474 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-308474 --network auto-308474 --ip 192.168.85.2 --volume auto-308474:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1020 13:25:32.500910  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Running}}
	I1020 13:25:32.519861  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:25:32.539027  506566 cli_runner.go:164] Run: docker exec auto-308474 stat /var/lib/dpkg/alternatives/iptables
	I1020 13:25:32.601808  506566 oci.go:144] the created container "auto-308474" has a running status.
	I1020 13:25:32.601844  506566 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa...
	I1020 13:25:32.991539  506566 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1020 13:25:33.016152  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:25:33.044172  506566 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1020 13:25:33.044190  506566 kic_runner.go:114] Args: [docker exec --privileged auto-308474 chown docker:docker /home/docker/.ssh/authorized_keys]
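
The two kic_runner steps above push the freshly generated public key into the node and fix its ownership. An equivalent sketch using docker cp instead of the exec-based copy the log shows (an assumption, not what minikube itself runs):

	KEYDIR=/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474
	# assumes /home/docker/.ssh already exists inside the kicbase container
	docker cp "$KEYDIR/id_rsa.pub" auto-308474:/home/docker/.ssh/authorized_keys
	docker exec --privileged auto-308474 chown docker:docker /home/docker/.ssh/authorized_keys
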
	I1020 13:25:33.111757  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:25:33.135198  506566 machine.go:93] provisionDockerMachine start ...
	I1020 13:25:33.135287  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:33.166084  506566 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:33.166405  506566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1020 13:25:33.166415  506566 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:25:33.168669  506566 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1020 13:25:36.320401  506566 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-308474
	
	I1020 13:25:36.320428  506566 ubuntu.go:182] provisioning hostname "auto-308474"
	I1020 13:25:36.320494  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:36.338466  506566 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:36.338788  506566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1020 13:25:36.338805  506566 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-308474 && echo "auto-308474" | sudo tee /etc/hostname
	I1020 13:25:36.498079  506566 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-308474
	
	I1020 13:25:36.498158  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:36.516686  506566 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:36.517001  506566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1020 13:25:36.517021  506566 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-308474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-308474/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-308474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:25:36.668836  506566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
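
A quick check that the hostname provisioning above took effect inside the node:

	# expect "auto-308474" from both commands
	docker exec auto-308474 sh -c 'hostname && grep "^127.0.1.1" /etc/hosts'
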
	I1020 13:25:36.668860  506566 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:25:36.668891  506566 ubuntu.go:190] setting up certificates
	I1020 13:25:36.668900  506566 provision.go:84] configureAuth start
	I1020 13:25:36.668966  506566 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-308474
	I1020 13:25:36.684841  506566 provision.go:143] copyHostCerts
	I1020 13:25:36.684915  506566 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:25:36.684925  506566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:25:36.685011  506566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:25:36.685121  506566 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:25:36.685127  506566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:25:36.685151  506566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:25:36.685205  506566 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:25:36.685209  506566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:25:36.685233  506566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:25:36.685276  506566 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.auto-308474 san=[127.0.0.1 192.168.85.2 auto-308474 localhost minikube]
	I1020 13:25:37.198816  506566 provision.go:177] copyRemoteCerts
	I1020 13:25:37.198882  506566 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:25:37.198936  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:37.216536  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:25:37.320411  506566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:25:37.338856  506566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1020 13:25:37.357206  506566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:25:37.374036  506566 provision.go:87] duration metric: took 705.122203ms to configureAuth
	I1020 13:25:37.374117  506566 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:25:37.374330  506566 config.go:182] Loaded profile config "auto-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:37.374433  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:37.391000  506566 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:37.391320  506566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1020 13:25:37.391339  506566 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:25:37.653441  506566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:25:37.653465  506566 machine.go:96] duration metric: took 4.518247081s to provisionDockerMachine
	I1020 13:25:37.653475  506566 client.go:171] duration metric: took 10.83334092s to LocalClient.Create
	I1020 13:25:37.653494  506566 start.go:167] duration metric: took 10.833427075s to libmachine.API.Create "auto-308474"
	I1020 13:25:37.653501  506566 start.go:293] postStartSetup for "auto-308474" (driver="docker")
	I1020 13:25:37.653512  506566 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:25:37.653576  506566 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:25:37.653646  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:37.671577  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:25:37.776267  506566 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:25:37.779496  506566 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:25:37.779525  506566 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:25:37.779536  506566 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:25:37.779594  506566 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:25:37.779674  506566 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:25:37.779777  506566 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:25:37.787060  506566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:25:37.803995  506566 start.go:296] duration metric: took 150.479719ms for postStartSetup
	I1020 13:25:37.804350  506566 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-308474
	I1020 13:25:37.820599  506566 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/config.json ...
	I1020 13:25:37.820879  506566 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:25:37.820929  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:37.843642  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:25:37.949659  506566 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:25:37.954417  506566 start.go:128] duration metric: took 11.139860647s to createHost
	I1020 13:25:37.954442  506566 start.go:83] releasing machines lock for "auto-308474", held for 11.139991553s
	I1020 13:25:37.954569  506566 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-308474
	I1020 13:25:37.970250  506566 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:25:37.970481  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:37.970252  506566 ssh_runner.go:195] Run: cat /version.json
	I1020 13:25:37.970767  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:25:37.990032  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:25:38.002939  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:25:38.108178  506566 ssh_runner.go:195] Run: systemctl --version
	I1020 13:25:38.199827  506566 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:25:38.237589  506566 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:25:38.242286  506566 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:25:38.242387  506566 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:25:38.271850  506566 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1020 13:25:38.271872  506566 start.go:495] detecting cgroup driver to use...
	I1020 13:25:38.271904  506566 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:25:38.271953  506566 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:25:38.292988  506566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:25:38.306682  506566 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:25:38.306791  506566 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:25:38.323917  506566 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:25:38.351827  506566 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:25:38.475682  506566 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:25:38.603514  506566 docker.go:234] disabling docker service ...
	I1020 13:25:38.603593  506566 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:25:38.625376  506566 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:25:38.639064  506566 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:25:38.763303  506566 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:25:38.893168  506566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:25:38.907413  506566 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:25:38.922005  506566 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:25:38.922099  506566 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:38.930834  506566 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:25:38.930905  506566 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:38.939956  506566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:38.949079  506566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:38.957497  506566 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:25:38.965651  506566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:38.974265  506566 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:38.987669  506566 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:25:38.996823  506566 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:25:39.005881  506566 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:25:39.014376  506566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:25:39.148156  506566 ssh_runner.go:195] Run: sudo systemctl restart crio
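
The run of sed edits above all target minikube's CRI-O drop-in; gathered into one copy-pasteable pass (same expressions as the log, run on the node), with the kernel-side steps kept separate since they touch /proc rather than the config file:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i \
	  -e 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' \
	  -e 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' \
	  -e '/conmon_cgroup = .*/d' \
	  "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sudo systemctl daemon-reload && sudo systemctl restart crio
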
	I1020 13:25:39.294017  506566 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:25:39.294086  506566 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:25:39.298101  506566 start.go:563] Will wait 60s for crictl version
	I1020 13:25:39.298168  506566 ssh_runner.go:195] Run: which crictl
	I1020 13:25:39.301709  506566 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:25:39.326597  506566 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
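
With the runtime back up, the same version probe works directly against the node; crictl reads its endpoint from the /etc/crictl.yaml written a few steps earlier:

	docker exec auto-308474 sudo /usr/local/bin/crictl version
	# runtime status and config as reported over the CRI socket
	docker exec auto-308474 sudo /usr/local/bin/crictl info
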
	I1020 13:25:39.326746  506566 ssh_runner.go:195] Run: crio --version
	I1020 13:25:39.353858  506566 ssh_runner.go:195] Run: crio --version
	I1020 13:25:39.385978  506566 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 20 13:25:26 no-preload-744804 crio[839]: time="2025-10-20T13:25:26.102472719Z" level=info msg="Created container 997d7f3e4748566f3e93b7631ad7154926c821ee5a08601e047d5bd0a48b42d2: kube-system/coredns-66bc5c9577-czxmg/coredns" id=3dfc5410-c135-4426-8b5d-da313c782706 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:26 no-preload-744804 crio[839]: time="2025-10-20T13:25:26.11269221Z" level=info msg="Starting container: 997d7f3e4748566f3e93b7631ad7154926c821ee5a08601e047d5bd0a48b42d2" id=26528019-df67-437e-9d30-f21ca3d5e290 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:25:26 no-preload-744804 crio[839]: time="2025-10-20T13:25:26.117956887Z" level=info msg="Started container" PID=2500 containerID=997d7f3e4748566f3e93b7631ad7154926c821ee5a08601e047d5bd0a48b42d2 description=kube-system/coredns-66bc5c9577-czxmg/coredns id=26528019-df67-437e-9d30-f21ca3d5e290 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6749ec8fa31c20b32e8d9d4c87dc8ac00794245909db0bae92aa679bbe5d69c9
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.37509652Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2fd76bf7-9cc8-4fe6-930e-293bb8ac26c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.375167602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.386578838Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c6d6d89d53b0d85c32133ab5d043f78a8a208364077f8ada8eca78b1f1d28319 UID:751404bb-a4a7-4344-b48b-077e31d184a4 NetNS:/var/run/netns/da78df49-7385-481c-ab7e-d42af4acbde5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028529f8}] Aliases:map[]}"
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.386742328Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.401280736Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c6d6d89d53b0d85c32133ab5d043f78a8a208364077f8ada8eca78b1f1d28319 UID:751404bb-a4a7-4344-b48b-077e31d184a4 NetNS:/var/run/netns/da78df49-7385-481c-ab7e-d42af4acbde5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028529f8}] Aliases:map[]}"
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.401614493Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.409126863Z" level=info msg="Ran pod sandbox c6d6d89d53b0d85c32133ab5d043f78a8a208364077f8ada8eca78b1f1d28319 with infra container: default/busybox/POD" id=2fd76bf7-9cc8-4fe6-930e-293bb8ac26c7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.410350141Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99c07586-050e-4c29-b830-83e8b1b3e927 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.410625279Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=99c07586-050e-4c29-b830-83e8b1b3e927 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.410745395Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=99c07586-050e-4c29-b830-83e8b1b3e927 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.413573995Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5da06dac-5bec-4523-bbf4-35eba1a9213b name=/runtime.v1.ImageService/PullImage
	Oct 20 13:25:29 no-preload-744804 crio[839]: time="2025-10-20T13:25:29.41668826Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.366882589Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5da06dac-5bec-4523-bbf4-35eba1a9213b name=/runtime.v1.ImageService/PullImage
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.367552924Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f20063ab-4c90-4df3-a807-715cb7a3125b name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.371505699Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d3f98740-dac9-4326-a67c-eb9b127b09c8 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.381483866Z" level=info msg="Creating container: default/busybox/busybox" id=bdbac1c5-4dd2-4a65-80ae-15765f0b6956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.381609603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.387377943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.387850573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.407350631Z" level=info msg="Created container 72436a0ef236341e888d18f2194018b3c5508ba1664928ce5f1e1860e9b5eb90: default/busybox/busybox" id=bdbac1c5-4dd2-4a65-80ae-15765f0b6956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.409230188Z" level=info msg="Starting container: 72436a0ef236341e888d18f2194018b3c5508ba1664928ce5f1e1860e9b5eb90" id=220ebfa5-8538-4edb-a865-32e08c951533 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:25:32 no-preload-744804 crio[839]: time="2025-10-20T13:25:32.411631106Z" level=info msg="Started container" PID=2556 containerID=72436a0ef236341e888d18f2194018b3c5508ba1664928ce5f1e1860e9b5eb90 description=default/busybox/busybox id=220ebfa5-8538-4edb-a865-32e08c951533 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c6d6d89d53b0d85c32133ab5d043f78a8a208364077f8ada8eca78b1f1d28319
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	72436a0ef2363       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   c6d6d89d53b0d       busybox                                     default
	997d7f3e47485       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago       Running             coredns                   0                   6749ec8fa31c2       coredns-66bc5c9577-czxmg                    kube-system
	d36c5b5b6d1be       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      16 seconds ago       Running             storage-provisioner       0                   92fbcfe3058d7       storage-provisioner                         kube-system
	40baa08f39c14       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    27 seconds ago       Running             kindnet-cni               0                   2a779fffe607d       kindnet-tqpf7                               kube-system
	483c9426c44db       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      About a minute ago   Running             kube-proxy                0                   c9182e292b375       kube-proxy-bv8x8                            kube-system
	4f2d38b704fdf       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   a3970f0d5793e       kube-controller-manager-no-preload-744804   kube-system
	98061f8eebb45       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   4c610d689eb08       kube-apiserver-no-preload-744804            kube-system
	ed97e32daa855       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   7562dcc3f8600       etcd-no-preload-744804                      kube-system
	febd69457b29e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   ed76e48f74c1b       kube-scheduler-no-preload-744804            kube-system
	
	
	==> coredns [997d7f3e4748566f3e93b7631ad7154926c821ee5a08601e047d5bd0a48b42d2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55312 - 6360 "HINFO IN 3208758530011051491.6550014579083169132. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016240175s
	
	
	==> describe nodes <==
	Name:               no-preload-744804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-744804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=no-preload-744804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_24_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:24:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-744804
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:25:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:25:25 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:25:25 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:25:25 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:25:25 +0000   Mon, 20 Oct 2025 13:25:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-744804
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e6ebf1aa-cf6a-460e-af7e-a66b26d17d7c
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-czxmg                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-no-preload-744804                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         80s
	  kube-system                 kindnet-tqpf7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-no-preload-744804             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-no-preload-744804    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-bv8x8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-no-preload-744804             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node no-preload-744804 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node no-preload-744804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node no-preload-744804 status is now: NodeHasSufficientPID
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node no-preload-744804 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node no-preload-744804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node no-preload-744804 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node no-preload-744804 event: Registered Node no-preload-744804 in Controller
	  Normal   NodeReady                17s                kubelet          Node no-preload-744804 status is now: NodeReady
	
	
	==> dmesg <==
	[ +19.150734] overlayfs: idmapped layers are currently not supported
	[ +11.501017] overlayfs: idmapped layers are currently not supported
	[Oct20 13:03] overlayfs: idmapped layers are currently not supported
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	[ +43.225983] overlayfs: idmapped layers are currently not supported
	[Oct20 13:24] overlayfs: idmapped layers are currently not supported
	[Oct20 13:25] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ed97e32daa8558d396105536ea521c661c8619717e7a9e3983fb877ff650e1f2] <==
	{"level":"warn","ts":"2025-10-20T13:24:15.495679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.557020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.592860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.636963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.676477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.700816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.772108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.789757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.824831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.867277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.941108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:15.971218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.021769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.056689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.098794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.108931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.160592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.184515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.240673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.259138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.335894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.342086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.381797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.420861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:24:16.592311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46960","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:25:42 up  3:08,  0 user,  load average: 3.22, 2.86, 2.57
	Linux no-preload-744804 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40baa08f39c149f7abb20e657acee3595f1796ae808231f1e846653ae1ce7717] <==
	I1020 13:25:15.000776       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:25:15.001202       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:25:15.001611       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:25:15.001668       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:25:15.001704       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:25:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:25:15.203901       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:25:15.203932       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:25:15.203942       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:25:15.204232       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1020 13:25:15.404518       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:25:15.404548       1 metrics.go:72] Registering metrics
	I1020 13:25:15.404608       1 controller.go:711] "Syncing nftables rules"
	I1020 13:25:25.202521       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:25:25.202644       1 main.go:301] handling current node
	I1020 13:25:35.203502       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:25:35.203602       1 main.go:301] handling current node
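
kindnet's steady state is visible at the end of its log: a node sync fires every ten seconds and, on this single-node cluster, only handles the current node. A toy Go sketch of that ticker-driven reconcile shape (syncNode is a stand-in, not kindnet's real function):

    // reconcile.go — the 10-second reconcile loop shape implied by the
    // "Handling node with IPs ... handling current node" pairs above.
    package main

    import (
    	"fmt"
    	"time"
    )

    func syncNode(ip string) { fmt.Println("syncing routes/nftables for node", ip) }

    func main() {
    	ticker := time.NewTicker(10 * time.Second)
    	defer ticker.Stop()
    	for range ticker.C { // runs until the daemon is stopped
    		syncNode("192.168.76.2") // node IP taken from the log above
    	}
    }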
	
	
	==> kube-apiserver [98061f8eebb4548604a72d232022010b948f2d6abc58b93af933e62ec3dc0359] <==
	I1020 13:24:18.207242       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:24:18.207268       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1020 13:24:18.214371       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:24:18.214502       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1020 13:24:18.229303       1 controller.go:667] quota admission added evaluator for: namespaces
	E1020 13:24:18.238466       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1020 13:24:18.374132       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:24:18.536892       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1020 13:24:18.558505       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1020 13:24:18.558528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:24:20.240330       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:24:20.318274       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:24:20.422419       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1020 13:24:20.444141       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1020 13:24:20.448976       1 controller.go:667] quota admission added evaluator for: endpoints
	I1020 13:24:20.457454       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:24:20.895546       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:24:21.660315       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:24:21.688134       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1020 13:24:21.707630       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1020 13:24:26.580074       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:24:26.641303       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1020 13:24:26.831116       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1020 13:24:26.837089       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1020 13:25:40.224752       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:38988: use of closed network connection
	
	
	==> kube-controller-manager [4f2d38b704fdf02d29f7003472a76f84187fa5828f0feab608c38146fd6f4c18] <==
	I1020 13:24:25.924796       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1020 13:24:25.925092       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 13:24:25.925112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1020 13:24:25.925165       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1020 13:24:25.925188       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1020 13:24:25.926108       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1020 13:24:25.926155       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:24:25.936447       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1020 13:24:25.936611       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:24:25.936674       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1020 13:24:25.936853       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1020 13:24:25.936876       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:24:25.937121       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1020 13:24:25.937188       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:24:25.937292       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:24:25.937391       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1020 13:24:25.937432       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1020 13:24:25.945841       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:24:25.945943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:24:25.945952       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:24:25.945958       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:24:25.946155       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:24:25.946219       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-744804"
	I1020 13:24:25.946252       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1020 13:25:25.953759       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [483c9426c44db817c6ec59b05eed78c0482a40b028abe9a5ddbf18bed9a2b561] <==
	I1020 13:24:27.909745       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:24:28.010542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:24:28.111642       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:24:28.111693       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:24:28.111790       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:24:28.233835       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:24:28.233970       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:24:28.259461       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:24:28.260198       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:24:28.260302       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:24:28.262795       1 config.go:200] "Starting service config controller"
	I1020 13:24:28.262868       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:24:28.262913       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:24:28.262955       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:24:28.262992       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:24:28.263018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:24:28.263792       1 config.go:309] "Starting node config controller"
	I1020 13:24:28.267796       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:24:28.267933       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:24:28.364492       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:24:28.364529       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:24:28.364575       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [febd69457b29ea9b7c78c9b9fbef8687af58a1ab3c3ef4a504afe5d3226764ec] <==
	E1020 13:24:17.994501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 13:24:17.994655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 13:24:17.994748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 13:24:17.994812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 13:24:17.994858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 13:24:18.812345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1020 13:24:18.879191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1020 13:24:18.893582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1020 13:24:19.008605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1020 13:24:19.049468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1020 13:24:19.091167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1020 13:24:19.127735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1020 13:24:19.142375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1020 13:24:19.196150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1020 13:24:19.248326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1020 13:24:19.394371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1020 13:24:19.435701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1020 13:24:19.443049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1020 13:24:19.479964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1020 13:24:19.495962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1020 13:24:19.495961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1020 13:24:19.524124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1020 13:24:19.546657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1020 13:24:19.568040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1020 13:24:21.120421       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
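
All of the scheduler's "Failed to watch ... is forbidden" errors fall inside the first few seconds after boot, before the system:kube-scheduler RBAC bindings had propagated; the final "Caches are synced" line at 13:24:21 shows they were transient. A small client-go sketch of the same wait-for-RBAC pattern, retrying a List only while the error is a Forbidden (access via the default kubeconfig is an assumption):

    // wait_rbac.go — retry a List until RBAC authorizes it.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()
    	for {
    		_, err := client.CoreV1().Pods("").List(ctx, metav1.ListOptions{})
    		if err == nil {
    			fmt.Println("list authorized; RBAC has propagated")
    			return
    		}
    		if !apierrors.IsForbidden(err) {
    			panic(err) // anything other than RBAC lag is a real failure
    		}
    		time.Sleep(time.Second)
    	}
    }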
	
	
	==> kubelet <==
	Oct 20 13:24:26 no-preload-744804 kubelet[2009]: I1020 13:24:26.756313    2009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmz2k\" (UniqueName: \"kubernetes.io/projected/d65258f0-f2a5-4c71-910b-d148291111ae-kube-api-access-pmz2k\") pod \"kindnet-tqpf7\" (UID: \"d65258f0-f2a5-4c71-910b-d148291111ae\") " pod="kube-system/kindnet-tqpf7"
	Oct 20 13:24:26 no-preload-744804 kubelet[2009]: E1020 13:24:26.876729    2009 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 20 13:24:26 no-preload-744804 kubelet[2009]: E1020 13:24:26.876923    2009 projected.go:196] Error preparing data for projected volume kube-api-access-pmz2k for pod kube-system/kindnet-tqpf7: configmap "kube-root-ca.crt" not found
	Oct 20 13:24:26 no-preload-744804 kubelet[2009]: E1020 13:24:26.877314    2009 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d65258f0-f2a5-4c71-910b-d148291111ae-kube-api-access-pmz2k podName:d65258f0-f2a5-4c71-910b-d148291111ae nodeName:}" failed. No retries permitted until 2025-10-20 13:24:27.377055977 +0000 UTC m=+5.782147680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pmz2k" (UniqueName: "kubernetes.io/projected/d65258f0-f2a5-4c71-910b-d148291111ae-kube-api-access-pmz2k") pod "kindnet-tqpf7" (UID: "d65258f0-f2a5-4c71-910b-d148291111ae") : configmap "kube-root-ca.crt" not found
	Oct 20 13:24:26 no-preload-744804 kubelet[2009]: E1020 13:24:26.877694    2009 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 20 13:24:26 no-preload-744804 kubelet[2009]: E1020 13:24:26.877811    2009 projected.go:196] Error preparing data for projected volume kube-api-access-smz7m for pod kube-system/kube-proxy-bv8x8: configmap "kube-root-ca.crt" not found
	Oct 20 13:24:26 no-preload-744804 kubelet[2009]: E1020 13:24:26.877923    2009 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/835b8b0c-6e21-43be-9656-1e09387eab43-kube-api-access-smz7m podName:835b8b0c-6e21-43be-9656-1e09387eab43 nodeName:}" failed. No retries permitted until 2025-10-20 13:24:27.377907738 +0000 UTC m=+5.782999441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-smz7m" (UniqueName: "kubernetes.io/projected/835b8b0c-6e21-43be-9656-1e09387eab43-kube-api-access-smz7m") pod "kube-proxy-bv8x8" (UID: "835b8b0c-6e21-43be-9656-1e09387eab43") : configmap "kube-root-ca.crt" not found
	Oct 20 13:24:27 no-preload-744804 kubelet[2009]: I1020 13:24:27.465154    2009 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 20 13:24:27 no-preload-744804 kubelet[2009]: W1020 13:24:27.656943    2009 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/crio-c9182e292b375c84b2b944375352368629bcd87bca933dffbafa848bdcf986b8 WatchSource:0}: Error finding container c9182e292b375c84b2b944375352368629bcd87bca933dffbafa848bdcf986b8: Status 404 returned error can't find the container with id c9182e292b375c84b2b944375352368629bcd87bca933dffbafa848bdcf986b8
	Oct 20 13:24:28 no-preload-744804 kubelet[2009]: I1020 13:24:28.062894    2009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bv8x8" podStartSLOduration=2.06287824 podStartE2EDuration="2.06287824s" podCreationTimestamp="2025-10-20 13:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:24:28.062710073 +0000 UTC m=+6.467801784" watchObservedRunningTime="2025-10-20 13:24:28.06287824 +0000 UTC m=+6.467969951"
	Oct 20 13:24:58 no-preload-744804 kubelet[2009]: E1020 13:24:58.087794    2009 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image" image="docker.io/kindest/kindnetd:v20250512-df8de77b"
	Oct 20 13:24:58 no-preload-744804 kubelet[2009]: E1020 13:24:58.087895    2009 kuberuntime_image.go:43] "Failed to pull image" err="unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image" image="docker.io/kindest/kindnetd:v20250512-df8de77b"
	Oct 20 13:24:58 no-preload-744804 kubelet[2009]: E1020 13:24:58.087987    2009 kuberuntime_manager.go:1449] "Unhandled Error" err="container kindnet-cni start failed in pod kindnet-tqpf7_kube-system(d65258f0-f2a5-4c71-910b-d148291111ae): ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image" logger="UnhandledError"
	Oct 20 13:24:58 no-preload-744804 kubelet[2009]: E1020 13:24:58.088024    2009 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image\"" pod="kube-system/kindnet-tqpf7" podUID="d65258f0-f2a5-4c71-910b-d148291111ae"
	Oct 20 13:24:59 no-preload-744804 kubelet[2009]: E1020 13:24:59.080567    2009 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: initializing source docker://kindest/kindnetd:v20250512-df8de77b: reading manifest v20250512-df8de77b in docker.io/kindest/kindnetd: received unexpected HTTP status: 504 Gateway Time-out; artifact err: provided artifact is a container image\"" pod="kube-system/kindnet-tqpf7" podUID="d65258f0-f2a5-4c71-910b-d148291111ae"
	Oct 20 13:25:25 no-preload-744804 kubelet[2009]: I1020 13:25:25.496983    2009 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 20 13:25:25 no-preload-744804 kubelet[2009]: I1020 13:25:25.528140    2009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tqpf7" podStartSLOduration=12.42839313 podStartE2EDuration="59.528119244s" podCreationTimestamp="2025-10-20 13:24:26 +0000 UTC" firstStartedPulling="2025-10-20 13:24:27.721934279 +0000 UTC m=+6.127025990" lastFinishedPulling="2025-10-20 13:25:14.821660401 +0000 UTC m=+53.226752104" observedRunningTime="2025-10-20 13:25:15.16290319 +0000 UTC m=+53.567994901" watchObservedRunningTime="2025-10-20 13:25:25.528119244 +0000 UTC m=+63.933210946"
	Oct 20 13:25:25 no-preload-744804 kubelet[2009]: I1020 13:25:25.645079    2009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc85t\" (UniqueName: \"kubernetes.io/projected/31880320-20a8-4cbe-b5c2-4b1a321c8501-kube-api-access-cc85t\") pod \"storage-provisioner\" (UID: \"31880320-20a8-4cbe-b5c2-4b1a321c8501\") " pod="kube-system/storage-provisioner"
	Oct 20 13:25:25 no-preload-744804 kubelet[2009]: I1020 13:25:25.645282    2009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfe5480f-3c87-4f50-8890-9aeb8740860b-config-volume\") pod \"coredns-66bc5c9577-czxmg\" (UID: \"dfe5480f-3c87-4f50-8890-9aeb8740860b\") " pod="kube-system/coredns-66bc5c9577-czxmg"
	Oct 20 13:25:25 no-preload-744804 kubelet[2009]: I1020 13:25:25.645383    2009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpplj\" (UniqueName: \"kubernetes.io/projected/dfe5480f-3c87-4f50-8890-9aeb8740860b-kube-api-access-kpplj\") pod \"coredns-66bc5c9577-czxmg\" (UID: \"dfe5480f-3c87-4f50-8890-9aeb8740860b\") " pod="kube-system/coredns-66bc5c9577-czxmg"
	Oct 20 13:25:25 no-preload-744804 kubelet[2009]: I1020 13:25:25.645476    2009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/31880320-20a8-4cbe-b5c2-4b1a321c8501-tmp\") pod \"storage-provisioner\" (UID: \"31880320-20a8-4cbe-b5c2-4b1a321c8501\") " pod="kube-system/storage-provisioner"
	Oct 20 13:25:26 no-preload-744804 kubelet[2009]: I1020 13:25:26.224440    2009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-czxmg" podStartSLOduration=59.224413022 podStartE2EDuration="59.224413022s" podCreationTimestamp="2025-10-20 13:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:25:26.217859162 +0000 UTC m=+64.622950890" watchObservedRunningTime="2025-10-20 13:25:26.224413022 +0000 UTC m=+64.629504733"
	Oct 20 13:25:27 no-preload-744804 kubelet[2009]: I1020 13:25:27.194321    2009 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=59.194215098 podStartE2EDuration="59.194215098s" podCreationTimestamp="2025-10-20 13:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-20 13:25:26.242531162 +0000 UTC m=+64.647622873" watchObservedRunningTime="2025-10-20 13:25:27.194215098 +0000 UTC m=+65.599306809"
	Oct 20 13:25:29 no-preload-744804 kubelet[2009]: I1020 13:25:29.175764    2009 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzb4q\" (UniqueName: \"kubernetes.io/projected/751404bb-a4a7-4344-b48b-077e31d184a4-kube-api-access-xzb4q\") pod \"busybox\" (UID: \"751404bb-a4a7-4344-b48b-077e31d184a4\") " pod="default/busybox"
	Oct 20 13:25:29 no-preload-744804 kubelet[2009]: W1020 13:25:29.406980    2009 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/crio-c6d6d89d53b0d85c32133ab5d043f78a8a208364077f8ada8eca78b1f1d28319 WatchSource:0}: Error finding container c6d6d89d53b0d85c32133ab5d043f78a8a208364077f8ada8eca78b1f1d28319: Status 404 returned error can't find the container with id c6d6d89d53b0d85c32133ab5d043f78a8a208364077f8ada8eca78b1f1d28319
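
The kubelet lines above show why the node sat NotReady for so long: the kindnetd pull hit a registry 504, and kubelet then applied its image-pull back-off (ErrImagePull followed by ImagePullBackOff) until the pull finally succeeded at 13:25:14. A Go sketch of that doubling back-off, using kubelet's default 10s initial delay and 5m cap (pullImage is a stand-in that always fails so the delays are visible):

    // backoff.go — the doubling image-pull back-off behind ImagePullBackOff.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // pullImage simulates the CRI pull that failed above with a registry 504.
    func pullImage(ref string) error {
    	return errors.New("received unexpected HTTP status: 504 Gateway Time-out")
    }

    func main() {
    	const maxDelay = 5 * time.Minute // kubelet's default cap
    	delay := 10 * time.Second        // kubelet's default initial back-off
    	for attempt := 1; attempt <= 4; attempt++ { // bounded to keep the sketch finite
    		if err := pullImage("docker.io/kindest/kindnetd:v20250512-df8de77b"); err != nil {
    			fmt.Printf("attempt %d: %v; next pull in %s\n", attempt, err, delay)
    			time.Sleep(delay)
    			if delay *= 2; delay > maxDelay {
    				delay = maxDelay
    			}
    			continue
    		}
    		fmt.Println("image pulled")
    		return
    	}
    }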
	
	
	==> storage-provisioner [d36c5b5b6d1bee3c68eb32aa78713b8f338d3460c3085c445d29909d7719328e] <==
	I1020 13:25:26.130196       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 13:25:26.135615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:26.142886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:25:26.143265       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:25:26.143500       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-744804_c74f3bf5-c0cc-49c4-805c-503c1f7d5389!
	I1020 13:25:26.147106       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"570f9e7e-e0b0-42c3-8be9-6674d6360b18", APIVersion:"v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-744804_c74f3bf5-c0cc-49c4-805c-503c1f7d5389 became leader
	W1020 13:25:26.204119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:26.237296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:25:26.247701       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-744804_c74f3bf5-c0cc-49c4-805c-503c1f7d5389!
	W1020 13:25:28.240749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:28.245954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:30.249138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:30.262660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:32.266827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:32.271990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:34.274463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:34.278593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:36.282242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:36.286972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:38.289658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:38.296969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:40.299797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:40.305413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:42.309007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:25:42.322236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
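
The storage-provisioner still runs leader election over a v1 Endpoints object, which is why every renewal above logs the "Endpoints is deprecated in v1.33+" warning. A hedged client-go sketch of the Lease-based election that would avoid the warning; the lease name and namespace are reused from the log, the identity string is invented:

    // lease_election.go — leader election via coordination.k8s.io Leases.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	lock := &resourcelock.LeaseLock{
    		// Same lock name the provisioner uses, but stored as a Lease,
    		// not an Endpoints object, so no deprecation warning.
    		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
    		Client:     client.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner"}, // invented identity
    	}
    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second,
    		RenewDeadline: 10 * time.Second,
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader; start provisioning") },
    			OnStoppedLeading: func() { fmt.Println("lost leadership; stop provisioning") },
    		},
    	})
    }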
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-744804 -n no-preload-744804
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-744804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (8.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-744804 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-744804 --alsologtostderr -v=1: exit status 80 (1.927562639s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-744804 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1020 13:26:58.037754  512336 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:26:58.037889  512336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:26:58.037900  512336 out.go:374] Setting ErrFile to fd 2...
	I1020 13:26:58.037905  512336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:26:58.038280  512336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:26:58.038579  512336 out.go:368] Setting JSON to false
	I1020 13:26:58.038608  512336 mustload.go:65] Loading cluster: no-preload-744804
	I1020 13:26:58.039325  512336 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:26:58.040529  512336 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:58.058351  512336 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:26:58.058669  512336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:26:58.131691  512336 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-20 13:26:58.121647106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:26:58.132362  512336 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-744804 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1020 13:26:58.135846  512336 out.go:179] * Pausing node no-preload-744804 ... 
	I1020 13:26:58.138737  512336 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:26:58.139105  512336 ssh_runner.go:195] Run: systemctl --version
	I1020 13:26:58.139170  512336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:58.157720  512336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:58.263232  512336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:26:58.285353  512336 pause.go:52] kubelet running: true
	I1020 13:26:58.285422  512336 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:26:58.557457  512336 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:26:58.557546  512336 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:26:58.626460  512336 cri.go:89] found id: "11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b"
	I1020 13:26:58.626488  512336 cri.go:89] found id: "886a327b32a4bf69e26cd65a10e8e6b11c1d668342dc5a21d9f727e71375f98b"
	I1020 13:26:58.626493  512336 cri.go:89] found id: "31f05cb96945b51652801d40a5cb2c12ac111770818e466dcbcbef7e5df312b3"
	I1020 13:26:58.626497  512336 cri.go:89] found id: "38c23aad4fa887459e239041be46dccc58a99edeb50d18acfa6a539f90c4f00e"
	I1020 13:26:58.626500  512336 cri.go:89] found id: "9a3ab7492a03c361b566afcb199e4bae1397925004be8ee7d219da31312fd02b"
	I1020 13:26:58.626504  512336 cri.go:89] found id: "8f15a98da5f338160fc0802f3aac18ef56c3a8ac8e7f0d8a95b82a15d0cbfba5"
	I1020 13:26:58.626507  512336 cri.go:89] found id: "6e11ee6379c8057195df4b7174497050554e2746585cffbcff5d6ee674caccd2"
	I1020 13:26:58.626510  512336 cri.go:89] found id: "1c3907b84b2719c834370b3a234bfcf74dccb4f164f5f6e62b92590abdba5b57"
	I1020 13:26:58.626514  512336 cri.go:89] found id: "4f36e401d485e4f4d90833026e33ea3530d32bdd15cccc9487bf620da50270af"
	I1020 13:26:58.626522  512336 cri.go:89] found id: "8d27bd77cd846ddebfc1d2a5c08dc83af7d745f6da25670e469bb37556ffcac2"
	I1020 13:26:58.626525  512336 cri.go:89] found id: "cc04991aeb9acd4eb11bb78237f8d40eb0cbc8fd30f87da44618819c62b1650e"
	I1020 13:26:58.626528  512336 cri.go:89] found id: ""
	I1020 13:26:58.626578  512336 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:26:58.646024  512336 retry.go:31] will retry after 230.157396ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:26:58Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:26:58.876512  512336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:26:58.889501  512336 pause.go:52] kubelet running: false
	I1020 13:26:58.889592  512336 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:26:59.073214  512336 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:26:59.073347  512336 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:26:59.152743  512336 cri.go:89] found id: "11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b"
	I1020 13:26:59.152815  512336 cri.go:89] found id: "886a327b32a4bf69e26cd65a10e8e6b11c1d668342dc5a21d9f727e71375f98b"
	I1020 13:26:59.152833  512336 cri.go:89] found id: "31f05cb96945b51652801d40a5cb2c12ac111770818e466dcbcbef7e5df312b3"
	I1020 13:26:59.152851  512336 cri.go:89] found id: "38c23aad4fa887459e239041be46dccc58a99edeb50d18acfa6a539f90c4f00e"
	I1020 13:26:59.152889  512336 cri.go:89] found id: "9a3ab7492a03c361b566afcb199e4bae1397925004be8ee7d219da31312fd02b"
	I1020 13:26:59.152912  512336 cri.go:89] found id: "8f15a98da5f338160fc0802f3aac18ef56c3a8ac8e7f0d8a95b82a15d0cbfba5"
	I1020 13:26:59.152931  512336 cri.go:89] found id: "6e11ee6379c8057195df4b7174497050554e2746585cffbcff5d6ee674caccd2"
	I1020 13:26:59.152951  512336 cri.go:89] found id: "1c3907b84b2719c834370b3a234bfcf74dccb4f164f5f6e62b92590abdba5b57"
	I1020 13:26:59.152986  512336 cri.go:89] found id: "4f36e401d485e4f4d90833026e33ea3530d32bdd15cccc9487bf620da50270af"
	I1020 13:26:59.153006  512336 cri.go:89] found id: "8d27bd77cd846ddebfc1d2a5c08dc83af7d745f6da25670e469bb37556ffcac2"
	I1020 13:26:59.153026  512336 cri.go:89] found id: "cc04991aeb9acd4eb11bb78237f8d40eb0cbc8fd30f87da44618819c62b1650e"
	I1020 13:26:59.153056  512336 cri.go:89] found id: ""
	I1020 13:26:59.153150  512336 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:26:59.164654  512336 retry.go:31] will retry after 445.791066ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:26:59Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:26:59.611442  512336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:26:59.625002  512336 pause.go:52] kubelet running: false
	I1020 13:26:59.625101  512336 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1020 13:26:59.812339  512336 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1020 13:26:59.812554  512336 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1020 13:26:59.877848  512336 cri.go:89] found id: "11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b"
	I1020 13:26:59.877872  512336 cri.go:89] found id: "886a327b32a4bf69e26cd65a10e8e6b11c1d668342dc5a21d9f727e71375f98b"
	I1020 13:26:59.877877  512336 cri.go:89] found id: "31f05cb96945b51652801d40a5cb2c12ac111770818e466dcbcbef7e5df312b3"
	I1020 13:26:59.877881  512336 cri.go:89] found id: "38c23aad4fa887459e239041be46dccc58a99edeb50d18acfa6a539f90c4f00e"
	I1020 13:26:59.877884  512336 cri.go:89] found id: "9a3ab7492a03c361b566afcb199e4bae1397925004be8ee7d219da31312fd02b"
	I1020 13:26:59.877887  512336 cri.go:89] found id: "8f15a98da5f338160fc0802f3aac18ef56c3a8ac8e7f0d8a95b82a15d0cbfba5"
	I1020 13:26:59.877890  512336 cri.go:89] found id: "6e11ee6379c8057195df4b7174497050554e2746585cffbcff5d6ee674caccd2"
	I1020 13:26:59.877894  512336 cri.go:89] found id: "1c3907b84b2719c834370b3a234bfcf74dccb4f164f5f6e62b92590abdba5b57"
	I1020 13:26:59.877896  512336 cri.go:89] found id: "4f36e401d485e4f4d90833026e33ea3530d32bdd15cccc9487bf620da50270af"
	I1020 13:26:59.877907  512336 cri.go:89] found id: "8d27bd77cd846ddebfc1d2a5c08dc83af7d745f6da25670e469bb37556ffcac2"
	I1020 13:26:59.877913  512336 cri.go:89] found id: "cc04991aeb9acd4eb11bb78237f8d40eb0cbc8fd30f87da44618819c62b1650e"
	I1020 13:26:59.877916  512336 cri.go:89] found id: ""
	I1020 13:26:59.877964  512336 ssh_runner.go:195] Run: sudo runc list -f json
	I1020 13:26:59.893644  512336 out.go:203] 
	W1020 13:26:59.896646  512336 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:26:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:26:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1020 13:26:59.896668  512336 out.go:285] * 
	* 
	W1020 13:26:59.903663  512336 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1020 13:26:59.906563  512336 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-744804 --alsologtostderr -v=1 failed: exit status 80
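Reading the trace above: kubelet was already stopped, the crictl queries found the expected kube-system containers, and the pause only fell over at the final step, `sudo runc list -f json`, because runc's default state root /run/runc does not exist inside this crio-based node. A minimal triage sketch for reproducing that by hand, assuming the profile name above; the /run/crio/runc alternate root is an illustrative guess, not a verified path, and crictl (which talks to crio directly) is the reliable cross-check:

    # reproduce the failing call exactly as minikube runs it
    out/minikube-linux-arm64 -p no-preload-744804 ssh -- sudo runc list -f json

    # point runc at an alternate state root (path is an assumption)
    out/minikube-linux-arm64 -p no-preload-744804 ssh -- sudo runc --root /run/crio/runc list -f json

    # crictl queries crio directly and does not depend on the runc root
    out/minikube-linux-arm64 -p no-preload-744804 ssh -- sudo crictl ps -a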
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-744804
helpers_test.go:243: (dbg) docker inspect no-preload-744804:

-- stdout --
	[
	    {
	        "Id": "7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41",
	        "Created": "2025-10-20T13:23:35.394425539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:25:56.407662694Z",
	            "FinishedAt": "2025-10-20T13:25:55.229292782Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/hostname",
	        "HostsPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/hosts",
	        "LogPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41-json.log",
	        "Name": "/no-preload-744804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-744804:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-744804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41",
	                "LowerDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-744804",
	                "Source": "/var/lib/docker/volumes/no-preload-744804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-744804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-744804",
	                "name.minikube.sigs.k8s.io": "no-preload-744804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce55b2cab51cff8c423d4ede4796543b2fbfda1944eaeac257f8855be870e989",
	            "SandboxKey": "/var/run/docker/netns/ce55b2cab51c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-744804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:b7:74:9e:92:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "307dee052f6f076bff152f38e429e93b9787d013b30129b59f6e7b891323decf",
	                    "EndpointID": "b21dfe8d4bad29545064b73f4c4e4313da88156ec8ee1252a07990cdf1676c70",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-744804",
	                        "7c7d00bb470e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
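The inspect output confirms the failure is confined to the guest: at the Docker layer the container is healthy (State.Status "running", Paused false, a fresh StartedAt). When only those fields matter, docker's standard Go-template support avoids wading through the full JSON; a quick sketch:

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-744804
    # the forwarded SSH port the tooling relies on (mirrors the cli_runner calls later in the log)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-744804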
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-744804 -n no-preload-744804
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-744804 -n no-preload-744804: exit status 2 (659.886504ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
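Exit status 2 here is consistent with the state the failed pause left behind: the host container is up (hence "Running" on stdout) while kubelet was deliberately disabled earlier in the pause path, and a non-zero status exit encodes the degraded components. For scripting, the JSON output is easier to consume than per-field templates; a sketch using flags present in this minikube build:

    out/minikube-linux-arm64 status -p no-preload-744804 --output json
    out/minikube-linux-arm64 status -p no-preload-744804 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'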
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-744804 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-744804 logs -n 25: (2.189142217s)
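For post-mortem capture, `logs -n 25` limits how far back each log source is tailed; the `--file` form referenced in the advice box above writes the full log set to disk for attaching to an issue:

    out/minikube-linux-arm64 -p no-preload-744804 logs --file=logs.txt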
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p disable-driver-mounts-972433                                                                                                                                                                                                               │ disable-driver-mounts-972433 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:25 UTC │
	│ image   │ embed-certs-979197 image list --format=json                                                                                                                                                                                                   │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ pause   │ -p embed-certs-979197 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ stop    │ -p newest-cni-018730 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-018730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ image   │ newest-cni-018730 image list --format=json                                                                                                                                                                                                    │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ pause   │ -p newest-cni-018730 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	│ delete  │ -p newest-cni-018730                                                                                                                                                                                                                          │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ delete  │ -p newest-cni-018730                                                                                                                                                                                                                          │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p auto-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-308474                  │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-744804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	│ stop    │ -p no-preload-744804 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ addons  │ enable dashboard -p no-preload-744804 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:26 UTC │
	│ ssh     │ -p auto-308474 pgrep -a kubelet                                                                                                                                                                                                               │ auto-308474                  │ jenkins │ v1.37.0 │ 20 Oct 25 13:26 UTC │ 20 Oct 25 13:26 UTC │
	│ image   │ no-preload-744804 image list --format=json                                                                                                                                                                                                    │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:26 UTC │ 20 Oct 25 13:26 UTC │
	│ pause   │ -p no-preload-744804 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:25:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:25:56.011636  509360 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:25:56.012318  509360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:56.012359  509360 out.go:374] Setting ErrFile to fd 2...
	I1020 13:25:56.013984  509360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:56.014342  509360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:25:56.014866  509360 out.go:368] Setting JSON to false
	I1020 13:25:56.015985  509360 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11306,"bootTime":1760955450,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:25:56.016092  509360 start.go:141] virtualization:  
	I1020 13:25:56.019296  509360 out.go:179] * [no-preload-744804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:25:56.023201  509360 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:25:56.023278  509360 notify.go:220] Checking for updates...
	I1020 13:25:56.029292  509360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:25:56.032090  509360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:56.034964  509360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:25:56.037864  509360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:25:56.040767  509360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:25:56.044229  509360 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:56.044883  509360 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:25:56.071290  509360 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:25:56.071418  509360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:56.169397  509360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-20 13:25:56.155087483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:56.169494  509360 docker.go:318] overlay module found
	I1020 13:25:56.172864  509360 out.go:179] * Using the docker driver based on existing profile
	I1020 13:25:56.175829  509360 start.go:305] selected driver: docker
	I1020 13:25:56.175850  509360 start.go:925] validating driver "docker" against &{Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:56.175956  509360 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:25:56.176650  509360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:56.293743  509360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-20 13:25:56.278849902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:56.294087  509360 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:25:56.294114  509360 cni.go:84] Creating CNI manager for ""
	I1020 13:25:56.294169  509360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:25:56.294211  509360 start.go:349] cluster config:
	{Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:56.297485  509360 out.go:179] * Starting "no-preload-744804" primary control-plane node in "no-preload-744804" cluster
	I1020 13:25:56.300454  509360 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:25:56.303322  509360 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:25:56.306237  509360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:25:56.306388  509360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json ...
	I1020 13:25:56.306713  509360 cache.go:107] acquiring lock: {Name:mk2466d3c957a995adbebbabeab0fa3cc60b0749 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.306800  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1020 13:25:56.306808  509360 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.741µs
	I1020 13:25:56.306816  509360 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1020 13:25:56.306827  509360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:25:56.307008  509360 cache.go:107] acquiring lock: {Name:mk2f501eec0d7af6312aef6efa1f5bbad5f4d684 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307055  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1020 13:25:56.307061  509360 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 58.175µs
	I1020 13:25:56.307068  509360 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1020 13:25:56.307078  509360 cache.go:107] acquiring lock: {Name:mk91e48e01c9d742f280bc2f9044086cb15ac8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307112  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1020 13:25:56.307117  509360 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 40.238µs
	I1020 13:25:56.307122  509360 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1020 13:25:56.307131  509360 cache.go:107] acquiring lock: {Name:mk06b7edc57ee881bc4af5e7d1c0bb5270ebff49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307162  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1020 13:25:56.307166  509360 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 36.004µs
	I1020 13:25:56.307174  509360 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1020 13:25:56.307183  509360 cache.go:107] acquiring lock: {Name:mk1d0a9075d8d12111d126a101053db6ac0a7b69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307214  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1020 13:25:56.307218  509360 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 36.776µs
	I1020 13:25:56.307225  509360 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1020 13:25:56.307235  509360 cache.go:107] acquiring lock: {Name:mkd8eb3de224a6da14efa26f40075e815e71b6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307263  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1020 13:25:56.307268  509360 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 33.756µs
	I1020 13:25:56.307273  509360 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1020 13:25:56.307288  509360 cache.go:107] acquiring lock: {Name:mk76c9e0dd61216d0c0ba53e6cfb9cbe19ddfd70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307315  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1020 13:25:56.307320  509360 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.451µs
	I1020 13:25:56.307326  509360 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1020 13:25:56.307335  509360 cache.go:107] acquiring lock: {Name:mkf695cbf431ff83306d5e1211f07fc194d769c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307360  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1020 13:25:56.307371  509360 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.933µs
	I1020 13:25:56.307377  509360 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1020 13:25:56.307383  509360 cache.go:87] Successfully saved all images to host disk.
	I1020 13:25:56.337199  509360 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:25:56.337219  509360 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:25:56.337232  509360 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:25:56.337254  509360 start.go:360] acquireMachinesLock for no-preload-744804: {Name:mk60261f5e12334720a2e0b8e33ce6265dbb09b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.337310  509360 start.go:364] duration metric: took 35.233µs to acquireMachinesLock for "no-preload-744804"
	I1020 13:25:56.337337  509360 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:25:56.337346  509360 fix.go:54] fixHost starting: 
	I1020 13:25:56.337590  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:25:56.359879  509360 fix.go:112] recreateIfNeeded on no-preload-744804: state=Stopped err=<nil>
	W1020 13:25:56.359915  509360 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 13:25:55.772525  506566 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.14659572s
	I1020 13:25:58.189668  506566 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.56676406s
	I1020 13:25:58.626266  506566 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.003290784s
	I1020 13:25:58.647080  506566 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 13:25:58.663387  506566 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 13:25:58.683497  506566 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 13:25:58.683978  506566 kubeadm.go:318] [mark-control-plane] Marking the node auto-308474 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 13:25:58.699134  506566 kubeadm.go:318] [bootstrap-token] Using token: rifdlr.uoz69o4zgmb29avx
	I1020 13:25:58.702126  506566 out.go:252]   - Configuring RBAC rules ...
	I1020 13:25:58.702281  506566 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 13:25:58.708578  506566 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 13:25:58.717677  506566 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 13:25:58.721787  506566 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 13:25:58.726185  506566 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 13:25:58.730501  506566 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 13:25:59.033644  506566 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 13:25:59.493714  506566 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 13:26:00.135533  506566 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 13:26:00.135569  506566 kubeadm.go:318] 
	I1020 13:26:00.135656  506566 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 13:26:00.135668  506566 kubeadm.go:318] 
	I1020 13:26:00.135756  506566 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 13:26:00.135761  506566 kubeadm.go:318] 
	I1020 13:26:00.148504  506566 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 13:26:00.148613  506566 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 13:26:00.148669  506566 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 13:26:00.148675  506566 kubeadm.go:318] 
	I1020 13:26:00.148733  506566 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 13:26:00.148757  506566 kubeadm.go:318] 
	I1020 13:26:00.148809  506566 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 13:26:00.148817  506566 kubeadm.go:318] 
	I1020 13:26:00.148872  506566 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 13:26:00.148952  506566 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 13:26:00.149023  506566 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 13:26:00.149030  506566 kubeadm.go:318] 
	I1020 13:26:00.149120  506566 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 13:26:00.149201  506566 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 13:26:00.149206  506566 kubeadm.go:318] 
	I1020 13:26:00.149296  506566 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token rifdlr.uoz69o4zgmb29avx \
	I1020 13:26:00.149404  506566 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 \
	I1020 13:26:00.149427  506566 kubeadm.go:318] 	--control-plane 
	I1020 13:26:00.149431  506566 kubeadm.go:318] 
	I1020 13:26:00.149521  506566 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 13:26:00.149526  506566 kubeadm.go:318] 
	I1020 13:26:00.149612  506566 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token rifdlr.uoz69o4zgmb29avx \
	I1020 13:26:00.149720  506566 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 
	I1020 13:26:00.169890  506566 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1020 13:26:00.170140  506566 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1020 13:26:00.170262  506566 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 13:26:00.170294  506566 cni.go:84] Creating CNI manager for ""
	I1020 13:26:00.170315  506566 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:26:00.173507  506566 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 13:25:56.363118  509360 out.go:252] * Restarting existing docker container for "no-preload-744804" ...
	I1020 13:25:56.363198  509360 cli_runner.go:164] Run: docker start no-preload-744804
	I1020 13:25:56.734348  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:25:56.758968  509360 kic.go:430] container "no-preload-744804" state is running.
	I1020 13:25:56.759368  509360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:25:56.786036  509360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json ...
	I1020 13:25:56.786271  509360 machine.go:93] provisionDockerMachine start ...
	I1020 13:25:56.786326  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:25:56.813992  509360 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:56.814325  509360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1020 13:25:56.814334  509360 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:25:56.815168  509360 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54772->127.0.0.1:33468: read: connection reset by peer
	I1020 13:25:59.964048  509360 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744804
	
	I1020 13:25:59.964134  509360 ubuntu.go:182] provisioning hostname "no-preload-744804"
	I1020 13:25:59.964231  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:25:59.982246  509360 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:59.982549  509360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1020 13:25:59.982565  509360 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744804 && echo "no-preload-744804" | sudo tee /etc/hostname
	I1020 13:26:00.345250  509360 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744804
	
	I1020 13:26:00.345347  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:00.370643  509360 main.go:141] libmachine: Using SSH client type: native
	I1020 13:26:00.371000  509360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1020 13:26:00.371034  509360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744804/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:26:00.557214  509360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:26:00.557243  509360 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:26:00.557266  509360 ubuntu.go:190] setting up certificates
	I1020 13:26:00.557276  509360 provision.go:84] configureAuth start
	I1020 13:26:00.557337  509360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:26:00.589850  509360 provision.go:143] copyHostCerts
	I1020 13:26:00.589932  509360 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:26:00.589958  509360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:26:00.590041  509360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:26:00.590152  509360 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:26:00.590162  509360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:26:00.590191  509360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:26:00.590253  509360 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:26:00.590264  509360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:26:00.590298  509360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:26:00.590366  509360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.no-preload-744804 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-744804]
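provision.go:117 above generates a per-machine server certificate signed by the shared minikube CA, with SANs covering loopback, the container IP, and the hostname. A rough openssl equivalent (a sketch only; the file names are hypothetical and OpenSSL 1.1.1+ is assumed):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr -subj "/O=jenkins.no-preload-744804"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:no-preload-744804')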
	I1020 13:26:00.965770  509360 provision.go:177] copyRemoteCerts
	I1020 13:26:00.965847  509360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:26:00.965899  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:00.986666  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:00.176514  506566 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 13:26:00.212180  506566 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 13:26:00.212211  506566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 13:26:00.322114  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1020 13:26:00.875621  506566 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 13:26:00.875757  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:00.875837  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-308474 minikube.k8s.io/updated_at=2025_10_20T13_26_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=auto-308474 minikube.k8s.io/primary=true
	I1020 13:26:00.908118  506566 ops.go:34] apiserver oom_adj: -16
	I1020 13:26:01.112775  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:01.094481  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:26:01.119812  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:26:01.146357  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:26:01.169932  509360 provision.go:87] duration metric: took 612.629829ms to configureAuth
	I1020 13:26:01.169961  509360 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:26:01.170158  509360 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:26:01.170284  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.190220  509360 main.go:141] libmachine: Using SSH client type: native
	I1020 13:26:01.190542  509360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1020 13:26:01.190565  509360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:26:01.547120  509360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:26:01.547144  509360 machine.go:96] duration metric: took 4.760864246s to provisionDockerMachine
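The sysconfig step just above writes an environment file consumed by the CRI-O unit and restarts the daemon, so the cluster's service CIDR (10.96.0.0/12) is treated as an insecure registry range. To confirm the override landed and the service came back, assuming systemd on the node:

	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio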
	I1020 13:26:01.547156  509360 start.go:293] postStartSetup for "no-preload-744804" (driver="docker")
	I1020 13:26:01.547167  509360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:26:01.547224  509360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:26:01.547261  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.577174  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:01.688993  509360 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:26:01.693044  509360 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:26:01.693071  509360 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:26:01.693082  509360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:26:01.693139  509360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:26:01.693214  509360 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:26:01.693311  509360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:26:01.702124  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:26:01.721940  509360 start.go:296] duration metric: took 174.769245ms for postStartSetup
	I1020 13:26:01.722020  509360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:26:01.722061  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.739094  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:01.841537  509360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:26:01.846292  509360 fix.go:56] duration metric: took 5.508938955s for fixHost
	I1020 13:26:01.846318  509360 start.go:83] releasing machines lock for "no-preload-744804", held for 5.508994308s
	I1020 13:26:01.846389  509360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:26:01.864646  509360 ssh_runner.go:195] Run: cat /version.json
	I1020 13:26:01.864716  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.864981  509360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:26:01.865031  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.886809  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:01.898222  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:02.094351  509360 ssh_runner.go:195] Run: systemctl --version
	I1020 13:26:02.101254  509360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:26:02.151814  509360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:26:02.158831  509360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:26:02.159078  509360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:26:02.168389  509360 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:26:02.168415  509360 start.go:495] detecting cgroup driver to use...
	I1020 13:26:02.168478  509360 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:26:02.168570  509360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:26:02.185562  509360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:26:02.203363  509360 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:26:02.203468  509360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:26:02.223031  509360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:26:02.243032  509360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:26:02.381801  509360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:26:02.510104  509360 docker.go:234] disabling docker service ...
	I1020 13:26:02.510181  509360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:26:02.525569  509360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:26:02.538798  509360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:26:02.696607  509360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:26:02.858499  509360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:26:02.873757  509360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:26:02.888259  509360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:26:02.888336  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.898371  509360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:26:02.898446  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.908797  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.917961  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.927439  509360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:26:02.935928  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.944871  509360 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.954562  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.964022  509360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:26:02.971746  509360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:26:02.979320  509360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:26:03.103739  509360 ssh_runner.go:195] Run: sudo systemctl restart crio
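Taken together, the sed edits above should leave the CRI-O drop-in roughly in the following shape by the time of this restart (a reconstruction from the commands, not a capture of the file):

	# /etc/crio/crio.conf.d/02-crio.conf (expected shape)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]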
	I1020 13:26:03.263184  509360 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:26:03.263254  509360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:26:03.267219  509360 start.go:563] Will wait 60s for crictl version
	I1020 13:26:03.267283  509360 ssh_runner.go:195] Run: which crictl
	I1020 13:26:03.270760  509360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:26:03.303826  509360 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1020 13:26:03.303909  509360 ssh_runner.go:195] Run: crio --version
	I1020 13:26:03.342303  509360 ssh_runner.go:195] Run: crio --version
	I1020 13:26:03.380191  509360 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 13:26:01.614184  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:02.113591  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:02.613628  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:03.112981  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:03.613585  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:04.112871  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:04.613448  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:04.817250  506566 kubeadm.go:1113] duration metric: took 3.941536772s to wait for elevateKubeSystemPrivileges
	I1020 13:26:04.817279  506566 kubeadm.go:402] duration metric: took 23.051102722s to StartCluster
	I1020 13:26:04.817296  506566 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:04.817353  506566 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:26:04.817998  506566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:04.818211  506566 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:26:04.818369  506566 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 13:26:04.818629  506566 config.go:182] Loaded profile config "auto-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:26:04.818665  506566 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:26:04.818732  506566 addons.go:69] Setting storage-provisioner=true in profile "auto-308474"
	I1020 13:26:04.818747  506566 addons.go:238] Setting addon storage-provisioner=true in "auto-308474"
	I1020 13:26:04.818770  506566 host.go:66] Checking if "auto-308474" exists ...
	I1020 13:26:04.819306  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:26:04.819800  506566 addons.go:69] Setting default-storageclass=true in profile "auto-308474"
	I1020 13:26:04.819825  506566 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-308474"
	I1020 13:26:04.820130  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:26:04.825315  506566 out.go:179] * Verifying Kubernetes components...
	I1020 13:26:04.828585  506566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:26:04.864779  506566 addons.go:238] Setting addon default-storageclass=true in "auto-308474"
	I1020 13:26:04.864821  506566 host.go:66] Checking if "auto-308474" exists ...
	I1020 13:26:04.865239  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:26:04.874373  506566 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:26:03.382951  509360 cli_runner.go:164] Run: docker network inspect no-preload-744804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:26:03.402466  509360 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 13:26:03.407469  509360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:26:03.418070  509360 kubeadm.go:883] updating cluster {Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:26:03.418179  509360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:26:03.418229  509360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:26:03.455769  509360 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:26:03.455797  509360 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:26:03.455806  509360 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1020 13:26:03.455900  509360 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-744804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
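The [Unit]/[Service] fragment above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines below), so the empty ExecStart= line clears the distro default before minikube's own command line is set. The merged unit can be inspected on the node with:

	systemctl cat kubelet
	sudo systemctl daemon-reload && sudo systemctl restart kubelet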
	I1020 13:26:03.455991  509360 ssh_runner.go:195] Run: crio config
	I1020 13:26:03.525952  509360 cni.go:84] Creating CNI manager for ""
	I1020 13:26:03.525982  509360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:26:03.526039  509360 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:26:03.526081  509360 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744804 NodeName:no-preload-744804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:26:03.526245  509360 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1020 13:26:03.526335  509360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:26:03.534664  509360 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:26:03.534785  509360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:26:03.542924  509360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 13:26:03.562521  509360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:26:03.576879  509360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
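The scp above ships the four-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) to /var/tmp/minikube/kubeadm.yaml.new. Outside the test, a config like this can be exercised without mutating node state via kubeadm's dry-run mode, e.g.:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run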
	I1020 13:26:03.590357  509360 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:26:03.594011  509360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:26:03.605663  509360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:26:03.756200  509360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:26:03.774171  509360 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804 for IP: 192.168.76.2
	I1020 13:26:03.774208  509360 certs.go:195] generating shared ca certs ...
	I1020 13:26:03.774253  509360 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:03.774425  509360 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:26:03.774497  509360 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:26:03.774513  509360 certs.go:257] generating profile certs ...
	I1020 13:26:03.774617  509360 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.key
	I1020 13:26:03.774718  509360 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a
	I1020 13:26:03.774839  509360 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key
	I1020 13:26:03.774996  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:26:03.775053  509360 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:26:03.775065  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:26:03.775091  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:26:03.775135  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:26:03.775166  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:26:03.775236  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:26:03.777376  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:26:03.798655  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:26:03.823043  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:26:03.844434  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:26:03.874933  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 13:26:03.898335  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:26:03.922563  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:26:03.959069  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:26:04.025796  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:26:04.057437  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:26:04.085816  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:26:04.122873  509360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:26:04.137997  509360 ssh_runner.go:195] Run: openssl version
	I1020 13:26:04.145226  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:26:04.154671  509360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:26:04.162534  509360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:26:04.162652  509360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:26:04.209332  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:26:04.217691  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:26:04.226417  509360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:26:04.232517  509360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:26:04.232635  509360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:26:04.275187  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:26:04.283347  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:26:04.294010  509360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:26:04.298587  509360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:26:04.298707  509360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:26:04.341912  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
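The ln -fs calls above follow the OpenSSL subject-hash convention: openssl x509 -hash -noout prints the hash that names the /etc/ssl/certs/<hash>.0 symlink, which is how b5213941.0 maps to minikubeCA.pem here. For example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0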
	I1020 13:26:04.350282  509360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:26:04.354968  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:26:04.417750  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:26:04.511265  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:26:04.624827  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:26:04.844748  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:26:05.007389  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
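Each -checkend 86400 probe above exits non-zero if the certificate expires within 86,400 seconds (24 hours), presumably so the restart path can decide whether control-plane certificates need regeneration. Standalone:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"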
	I1020 13:26:05.169071  509360 kubeadm.go:400] StartCluster: {Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:26:05.169174  509360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:26:05.169246  509360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:26:05.241879  509360 cri.go:89] found id: "8f15a98da5f338160fc0802f3aac18ef56c3a8ac8e7f0d8a95b82a15d0cbfba5"
	I1020 13:26:05.241904  509360 cri.go:89] found id: "6e11ee6379c8057195df4b7174497050554e2746585cffbcff5d6ee674caccd2"
	I1020 13:26:05.241910  509360 cri.go:89] found id: "1c3907b84b2719c834370b3a234bfcf74dccb4f164f5f6e62b92590abdba5b57"
	I1020 13:26:05.241914  509360 cri.go:89] found id: "4f36e401d485e4f4d90833026e33ea3530d32bdd15cccc9487bf620da50270af"
	I1020 13:26:05.241917  509360 cri.go:89] found id: ""
	I1020 13:26:05.241969  509360 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:26:05.266139  509360 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:26:05Z" level=error msg="open /run/runc: no such file or directory"
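The runc failure above is benign in this path: runc keeps container state under /run/runc by default, so a missing directory just means no runc-managed containers exist yet, and the code logs the warning and continues. A quick check on a node:

	ls /run/runc 2>/dev/null || echo "no runc state dir (no containers)"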
	I1020 13:26:05.266235  509360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:26:05.282875  509360 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:26:05.282898  509360 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:26:05.282971  509360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:26:05.304820  509360 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:26:05.305438  509360 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-744804" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:26:05.305690  509360 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-744804" cluster setting kubeconfig missing "no-preload-744804" context setting]
	I1020 13:26:05.306182  509360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:05.307579  509360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:26:05.333651  509360 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1020 13:26:05.333687  509360 kubeadm.go:601] duration metric: took 50.78306ms to restartPrimaryControlPlane
	I1020 13:26:05.333697  509360 kubeadm.go:402] duration metric: took 164.637418ms to StartCluster
	I1020 13:26:05.333720  509360 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:05.333801  509360 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:26:05.334811  509360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:05.335052  509360 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:26:05.335452  509360 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:26:05.335576  509360 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744804"
	I1020 13:26:05.335592  509360 addons.go:238] Setting addon storage-provisioner=true in "no-preload-744804"
	W1020 13:26:05.335598  509360 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:26:05.335622  509360 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:26:05.335637  509360 addons.go:69] Setting dashboard=true in profile "no-preload-744804"
	I1020 13:26:05.335654  509360 addons.go:238] Setting addon dashboard=true in "no-preload-744804"
	W1020 13:26:05.335660  509360 addons.go:247] addon dashboard should already be in state true
	I1020 13:26:05.335681  509360 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:26:05.336082  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:05.336208  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:05.336574  509360 addons.go:69] Setting default-storageclass=true in profile "no-preload-744804"
	I1020 13:26:05.336600  509360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744804"
	I1020 13:26:05.336879  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:05.335508  509360 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:26:05.341132  509360 out.go:179] * Verifying Kubernetes components...
	I1020 13:26:05.344814  509360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:26:05.394881  509360 addons.go:238] Setting addon default-storageclass=true in "no-preload-744804"
	W1020 13:26:05.394899  509360 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:26:05.394923  509360 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:26:05.395335  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:05.397623  509360 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:26:05.400619  509360 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:26:05.403544  509360 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:26:05.403566  509360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:26:05.403629  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:05.407006  509360 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 13:26:05.412451  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:26:05.412482  509360 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:26:05.412582  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:05.448637  509360 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:26:05.448657  509360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:26:05.448719  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:05.451939  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:05.477250  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:05.488686  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:05.858631  509360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:26:05.957772  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:26:05.957839  509360 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:26:05.988006  509360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:26:06.000801  509360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:26:04.879439  506566 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:26:04.879480  506566 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:26:04.879547  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:26:04.910856  506566 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:26:04.910876  506566 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:26:04.910941  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:26:04.925762  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:26:04.942261  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:26:05.420684  506566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:26:05.509176  506566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:26:05.615683  506566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:26:05.793377  506566 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 13:26:05.794300  506566 node_ready.go:35] waiting up to 15m0s for node "auto-308474" to be "Ready" ...
	I1020 13:26:07.267783  506566 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.652060265s)
	I1020 13:26:07.268010  506566 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.474598979s)
	I1020 13:26:07.268032  506566 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1020 13:26:07.270928  506566 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1020 13:26:06.122888  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:26:06.122962  509360 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:26:06.235466  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:26:06.235542  509360 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:26:06.320072  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:26:06.320143  509360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:26:06.361287  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:26:06.361359  509360 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:26:06.390727  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:26:06.390798  509360 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:26:06.422072  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:26:06.422146  509360 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:26:06.507162  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:26:06.507233  509360 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:26:06.545636  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:26:06.545705  509360 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:26:06.571037  509360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
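All ten dashboard manifests staged above are applied in a single kubectl invocation by repeating -f. Once the apply completes (the Completed line appears further down, after roughly 6.5s), the rollout can be checked with something like the following, assuming the conventional kubernetes-dashboard namespace that dashboard-ns.yaml creates:

	sudo kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kubernetes-dashboard get deploy,svc,pods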
	I1020 13:26:07.273997  506566 addons.go:514] duration metric: took 2.455311566s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1020 13:26:07.772513  506566 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-308474" context rescaled to 1 replicas
	W1020 13:26:07.797620  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:09.798106  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	I1020 13:26:13.082994  509360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.09490748s)
	I1020 13:26:13.083080  509360 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.082184471s)
	I1020 13:26:13.083311  509360 node_ready.go:35] waiting up to 6m0s for node "no-preload-744804" to be "Ready" ...
	I1020 13:26:13.083188  509360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.512070824s)
	I1020 13:26:13.084567  509360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.225855038s)
	I1020 13:26:13.087629  509360 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-744804 addons enable metrics-server
	
	I1020 13:26:13.125283  509360 node_ready.go:49] node "no-preload-744804" is "Ready"
	I1020 13:26:13.125367  509360 node_ready.go:38] duration metric: took 42.042571ms for node "no-preload-744804" to be "Ready" ...
	I1020 13:26:13.125395  509360 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:26:13.125490  509360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:26:13.162119  509360 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1020 13:26:13.165046  509360 addons.go:514] duration metric: took 7.8295782s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1020 13:26:13.174043  509360 api_server.go:72] duration metric: took 7.838934116s to wait for apiserver process to appear ...
	I1020 13:26:13.174131  509360 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:26:13.174174  509360 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:26:13.184515  509360 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:26:13.184599  509360 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500: [body identical to the healthz response above]
	I1020 13:26:13.675278  509360 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:26:13.684406  509360 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 13:26:13.685629  509360 api_server.go:141] control plane version: v1.34.1
	I1020 13:26:13.685655  509360 api_server.go:131] duration metric: took 511.504558ms to wait for apiserver health ...
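	
	The healthz wait above simply re-polls GET /healthz until the apiserver answers 200; the initial 500 is expected while the rbac/bootstrap-roles post-start hook finishes. A minimal Go sketch of the same polling loop, assuming anonymous HTTPS with certificate verification disabled (minikube itself authenticates with client certificates from the kubeconfig):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the timeout elapses. InsecureSkipVerify is a stand-in for minikube's
	// real client-cert transport, not what the tool actually does.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the re-poll interval seen in the log
		}
		return fmt.Errorf("apiserver never became healthy within %s", timeout)
	}
	
	func main() {
		if err := waitHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	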
	I1020 13:26:13.685665  509360 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:26:13.689426  509360 system_pods.go:59] 8 kube-system pods found
	I1020 13:26:13.689466  509360 system_pods.go:61] "coredns-66bc5c9577-czxmg" [dfe5480f-3c87-4f50-8890-9aeb8740860b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:13.689476  509360 system_pods.go:61] "etcd-no-preload-744804" [861cd06e-ae97-40a2-94f3-c36f118ae148] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:26:13.689520  509360 system_pods.go:61] "kindnet-tqpf7" [d65258f0-f2a5-4c71-910b-d148291111ae] Running
	I1020 13:26:13.689528  509360 system_pods.go:61] "kube-apiserver-no-preload-744804" [5045b24e-f1ef-4e65-938c-3999ea03c565] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:26:13.689535  509360 system_pods.go:61] "kube-controller-manager-no-preload-744804" [f842efbf-e39d-4c96-b2d2-14918e2a33a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:26:13.689545  509360 system_pods.go:61] "kube-proxy-bv8x8" [835b8b0c-6e21-43be-9656-1e09387eab43] Running
	I1020 13:26:13.689552  509360 system_pods.go:61] "kube-scheduler-no-preload-744804" [469f86bf-dc90-42fe-9d33-901b8c97aabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:26:13.689556  509360 system_pods.go:61] "storage-provisioner" [31880320-20a8-4cbe-b5c2-4b1a321c8501] Running
	I1020 13:26:13.689579  509360 system_pods.go:74] duration metric: took 3.907843ms to wait for pod list to return data ...
	I1020 13:26:13.689594  509360 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:26:13.692142  509360 default_sa.go:45] found service account: "default"
	I1020 13:26:13.692167  509360 default_sa.go:55] duration metric: took 2.565966ms for default service account to be created ...
	I1020 13:26:13.692177  509360 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:26:13.695010  509360 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:13.695044  509360 system_pods.go:89] "coredns-66bc5c9577-czxmg" [dfe5480f-3c87-4f50-8890-9aeb8740860b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:13.695054  509360 system_pods.go:89] "etcd-no-preload-744804" [861cd06e-ae97-40a2-94f3-c36f118ae148] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:26:13.695085  509360 system_pods.go:89] "kindnet-tqpf7" [d65258f0-f2a5-4c71-910b-d148291111ae] Running
	I1020 13:26:13.695093  509360 system_pods.go:89] "kube-apiserver-no-preload-744804" [5045b24e-f1ef-4e65-938c-3999ea03c565] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:26:13.695107  509360 system_pods.go:89] "kube-controller-manager-no-preload-744804" [f842efbf-e39d-4c96-b2d2-14918e2a33a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:26:13.695113  509360 system_pods.go:89] "kube-proxy-bv8x8" [835b8b0c-6e21-43be-9656-1e09387eab43] Running
	I1020 13:26:13.695123  509360 system_pods.go:89] "kube-scheduler-no-preload-744804" [469f86bf-dc90-42fe-9d33-901b8c97aabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:26:13.695150  509360 system_pods.go:89] "storage-provisioner" [31880320-20a8-4cbe-b5c2-4b1a321c8501] Running
	I1020 13:26:13.695158  509360 system_pods.go:126] duration metric: took 2.975473ms to wait for k8s-apps to be running ...
	I1020 13:26:13.695166  509360 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:26:13.695222  509360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:26:13.712257  509360 system_svc.go:56] duration metric: took 17.082049ms WaitForService to wait for kubelet
	I1020 13:26:13.712326  509360 kubeadm.go:586] duration metric: took 8.377232821s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:26:13.712470  509360 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:26:13.715579  509360 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:26:13.715611  509360 node_conditions.go:123] node cpu capacity is 2
	I1020 13:26:13.715633  509360 node_conditions.go:105] duration metric: took 3.149548ms to run NodePressure ...
	I1020 13:26:13.715660  509360 start.go:241] waiting for startup goroutines ...
	I1020 13:26:13.715676  509360 start.go:246] waiting for cluster config update ...
	I1020 13:26:13.715687  509360 start.go:255] writing updated cluster config ...
	I1020 13:26:13.715995  509360 ssh_runner.go:195] Run: rm -f paused
	I1020 13:26:13.720268  509360 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:26:13.723896  509360 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czxmg" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 13:26:15.729519  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:11.798537  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:14.297265  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:17.730066  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:19.730855  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:16.797451  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:18.797547  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:21.298299  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:22.230564  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:24.732349  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:23.797534  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:25.797657  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:27.229557  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:29.230355  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:27.797921  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:30.298267  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:31.729737  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:34.230242  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:32.797268  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:34.797341  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:36.729257  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:38.729839  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:36.801923  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:39.297153  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:41.297259  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:41.229827  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:43.233887  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	I1020 13:26:44.730007  509360 pod_ready.go:94] pod "coredns-66bc5c9577-czxmg" is "Ready"
	I1020 13:26:44.730034  509360 pod_ready.go:86] duration metric: took 31.006109862s for pod "coredns-66bc5c9577-czxmg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.733072  509360 pod_ready.go:83] waiting for pod "etcd-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.738152  509360 pod_ready.go:94] pod "etcd-no-preload-744804" is "Ready"
	I1020 13:26:44.738233  509360 pod_ready.go:86] duration metric: took 5.134781ms for pod "etcd-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.741443  509360 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.746704  509360 pod_ready.go:94] pod "kube-apiserver-no-preload-744804" is "Ready"
	I1020 13:26:44.746735  509360 pod_ready.go:86] duration metric: took 5.25909ms for pod "kube-apiserver-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.749173  509360 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.927983  509360 pod_ready.go:94] pod "kube-controller-manager-no-preload-744804" is "Ready"
	I1020 13:26:44.928012  509360 pod_ready.go:86] duration metric: took 178.813474ms for pod "kube-controller-manager-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:45.131271  509360 pod_ready.go:83] waiting for pod "kube-proxy-bv8x8" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:45.527950  509360 pod_ready.go:94] pod "kube-proxy-bv8x8" is "Ready"
	I1020 13:26:45.527980  509360 pod_ready.go:86] duration metric: took 396.676644ms for pod "kube-proxy-bv8x8" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:45.728068  509360 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:46.128382  509360 pod_ready.go:94] pod "kube-scheduler-no-preload-744804" is "Ready"
	I1020 13:26:46.128451  509360 pod_ready.go:86] duration metric: took 400.353993ms for pod "kube-scheduler-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:46.128471  509360 pod_ready.go:40] duration metric: took 32.408134608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:26:46.180351  509360 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:26:46.183326  509360 out.go:179] * Done! kubectl is now configured to use "no-preload-744804" cluster and "default" namespace by default
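	
	The pod_ready loop above re-fetches each kube-system pod until its Ready condition is True or the pod is gone, logging a warning on every failed attempt. A sketch of the same check with client-go; this is a hypothetical standalone helper, with the namespace and pod name taken as example values from this log:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll like pod_ready.go: done when the pod is Ready or gone, else retry.
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-czxmg", metav1.GetOptions{})
			if errors.IsNotFound(err) {
				fmt.Println("pod is gone")
				return
			}
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
	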
	W1020 13:26:43.297661  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:45.298471  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	I1020 13:26:47.802167  506566 node_ready.go:49] node "auto-308474" is "Ready"
	I1020 13:26:47.802192  506566 node_ready.go:38] duration metric: took 42.007858391s for node "auto-308474" to be "Ready" ...
	I1020 13:26:47.802204  506566 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:26:47.802280  506566 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:26:47.822891  506566 api_server.go:72] duration metric: took 43.004648108s to wait for apiserver process to appear ...
	I1020 13:26:47.822914  506566 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:26:47.822935  506566 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:26:47.837008  506566 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 13:26:47.839393  506566 api_server.go:141] control plane version: v1.34.1
	I1020 13:26:47.839428  506566 api_server.go:131] duration metric: took 16.506805ms to wait for apiserver health ...
	I1020 13:26:47.839438  506566 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:26:47.843525  506566 system_pods.go:59] 8 kube-system pods found
	I1020 13:26:47.843560  506566 system_pods.go:61] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:47.843566  506566 system_pods.go:61] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:47.843572  506566 system_pods.go:61] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:47.843577  506566 system_pods.go:61] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:47.843581  506566 system_pods.go:61] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:47.843585  506566 system_pods.go:61] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:47.843592  506566 system_pods.go:61] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:47.843598  506566 system_pods.go:61] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:26:47.843605  506566 system_pods.go:74] duration metric: took 4.160203ms to wait for pod list to return data ...
	I1020 13:26:47.843613  506566 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:26:47.849932  506566 default_sa.go:45] found service account: "default"
	I1020 13:26:47.850006  506566 default_sa.go:55] duration metric: took 6.386021ms for default service account to be created ...
	I1020 13:26:47.850031  506566 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:26:47.855227  506566 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:47.855313  506566 system_pods.go:89] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:47.855353  506566 system_pods.go:89] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:47.855393  506566 system_pods.go:89] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:47.855427  506566 system_pods.go:89] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:47.855450  506566 system_pods.go:89] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:47.855481  506566 system_pods.go:89] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:47.855504  506566 system_pods.go:89] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:47.855528  506566 system_pods.go:89] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:26:47.855580  506566 retry.go:31] will retry after 268.502746ms: missing components: kube-dns
	I1020 13:26:48.129068  506566 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:48.129106  506566 system_pods.go:89] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:48.129114  506566 system_pods.go:89] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:48.129119  506566 system_pods.go:89] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:48.129146  506566 system_pods.go:89] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:48.129155  506566 system_pods.go:89] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:48.129160  506566 system_pods.go:89] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:48.129164  506566 system_pods.go:89] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:48.129172  506566 system_pods.go:89] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:26:48.129190  506566 retry.go:31] will retry after 257.512221ms: missing components: kube-dns
	I1020 13:26:48.391401  506566 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:48.391443  506566 system_pods.go:89] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:48.391450  506566 system_pods.go:89] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:48.391456  506566 system_pods.go:89] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:48.391461  506566 system_pods.go:89] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:48.391484  506566 system_pods.go:89] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:48.391496  506566 system_pods.go:89] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:48.391500  506566 system_pods.go:89] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:48.391506  506566 system_pods.go:89] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:26:48.391528  506566 retry.go:31] will retry after 380.497423ms: missing components: kube-dns
	I1020 13:26:48.776652  506566 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:48.776684  506566 system_pods.go:89] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Running
	I1020 13:26:48.776692  506566 system_pods.go:89] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:48.776696  506566 system_pods.go:89] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:48.776701  506566 system_pods.go:89] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:48.776706  506566 system_pods.go:89] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:48.776711  506566 system_pods.go:89] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:48.776716  506566 system_pods.go:89] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:48.776720  506566 system_pods.go:89] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Running
	I1020 13:26:48.776728  506566 system_pods.go:126] duration metric: took 926.678074ms to wait for k8s-apps to be running ...
	I1020 13:26:48.776740  506566 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:26:48.776798  506566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:26:48.790900  506566 system_svc.go:56] duration metric: took 14.150934ms WaitForService to wait for kubelet
	I1020 13:26:48.790935  506566 kubeadm.go:586] duration metric: took 43.972700158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:26:48.790965  506566 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:26:48.793935  506566 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:26:48.794016  506566 node_conditions.go:123] node cpu capacity is 2
	I1020 13:26:48.794034  506566 node_conditions.go:105] duration metric: took 3.06354ms to run NodePressure ...
	I1020 13:26:48.794050  506566 start.go:241] waiting for startup goroutines ...
	I1020 13:26:48.794059  506566 start.go:246] waiting for cluster config update ...
	I1020 13:26:48.794070  506566 start.go:255] writing updated cluster config ...
	I1020 13:26:48.794397  506566 ssh_runner.go:195] Run: rm -f paused
	I1020 13:26:48.798191  506566 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:26:48.802262  506566 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnvj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.807422  506566 pod_ready.go:94] pod "coredns-66bc5c9577-nnvj2" is "Ready"
	I1020 13:26:48.807453  506566 pod_ready.go:86] duration metric: took 5.163097ms for pod "coredns-66bc5c9577-nnvj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.809897  506566 pod_ready.go:83] waiting for pod "etcd-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.815233  506566 pod_ready.go:94] pod "etcd-auto-308474" is "Ready"
	I1020 13:26:48.815260  506566 pod_ready.go:86] duration metric: took 5.334889ms for pod "etcd-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.818088  506566 pod_ready.go:83] waiting for pod "kube-apiserver-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.822744  506566 pod_ready.go:94] pod "kube-apiserver-auto-308474" is "Ready"
	I1020 13:26:48.822821  506566 pod_ready.go:86] duration metric: took 4.707247ms for pod "kube-apiserver-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.825516  506566 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:49.203094  506566 pod_ready.go:94] pod "kube-controller-manager-auto-308474" is "Ready"
	I1020 13:26:49.203122  506566 pod_ready.go:86] duration metric: took 377.529952ms for pod "kube-controller-manager-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:49.402321  506566 pod_ready.go:83] waiting for pod "kube-proxy-c6ssp" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:49.802405  506566 pod_ready.go:94] pod "kube-proxy-c6ssp" is "Ready"
	I1020 13:26:49.802431  506566 pod_ready.go:86] duration metric: took 400.084838ms for pod "kube-proxy-c6ssp" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:50.002903  506566 pod_ready.go:83] waiting for pod "kube-scheduler-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:50.402088  506566 pod_ready.go:94] pod "kube-scheduler-auto-308474" is "Ready"
	I1020 13:26:50.402117  506566 pod_ready.go:86] duration metric: took 399.137536ms for pod "kube-scheduler-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:50.402131  506566 pod_ready.go:40] duration metric: took 1.603906597s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:26:50.455071  506566 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:26:50.458595  506566 out.go:179] * Done! kubectl is now configured to use "auto-308474" cluster and "default" namespace by default
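	
	The retry.go lines in the auto-308474 run back off for a few hundred milliseconds whenever a required component (here kube-dns) is still Pending, then re-list the kube-system pods. A plain-Go sketch of that retry pattern; checkComponents is a hypothetical stub standing in for the real pod listing:
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// checkComponents stands in for listing kube-system pods; it returns the
	// required components that are not yet Running (hypothetical stub that
	// pretends kube-dns eventually comes up).
	func checkComponents() []string {
		if rand.Intn(4) == 0 {
			return nil
		}
		return []string{"kube-dns"}
	}
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			missing := checkComponents()
			if len(missing) == 0 {
				fmt.Println("all k8s-apps running")
				return
			}
			// Jittered backoff in the same few-hundred-millisecond range as retry.go.
			delay := 250*time.Millisecond + time.Duration(rand.Intn(250))*time.Millisecond
			fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
			time.Sleep(delay)
		}
		fmt.Println("timed out waiting for k8s-apps")
	}
	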
	
	
	==> CRI-O <==
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.233004709Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=714f57d1-e8c6-45f1-9a31-6d0ca345b927 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.234585932Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7ab6689f-58fb-48d8-ba3c-1e14fb0dd391 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.234675057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.239694259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.240012392Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a672f1408ea659b07e63c58a052b894dbbec7aaf62c96876e5d78bd8ff353224/merged/etc/passwd: no such file or directory"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.240113062Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a672f1408ea659b07e63c58a052b894dbbec7aaf62c96876e5d78bd8ff353224/merged/etc/group: no such file or directory"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.240459348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.273778562Z" level=info msg="Created container 11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b: kube-system/storage-provisioner/storage-provisioner" id=7ab6689f-58fb-48d8-ba3c-1e14fb0dd391 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.275491915Z" level=info msg="Starting container: 11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b" id=c3611939-1841-49ef-a1f8-3cc9d4c1d3b0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.277423198Z" level=info msg="Started container" PID=1636 containerID=11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b description=kube-system/storage-provisioner/storage-provisioner id=c3611939-1841-49ef-a1f8-3cc9d4c1d3b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2e4fcf81f2f292c68bb8a43456cb5ad09f0f67a832f2db89950b78e98a6fa80
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.924722336Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.928904168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.929086341Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.929503183Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.933049019Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.933175273Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.933256374Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.937866987Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.937894491Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.937913109Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.943933489Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.943965547Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.94398533Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.947686064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.947719812Z" level=info msg="Updated default CNI network name to kindnet"
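	
	The CRI-O messages above come from its watch on /etc/cni/net.d: kindnet writes 10-kindnet.conflist.temp and renames it into place, and every CREATE/WRITE/RENAME event makes CRI-O re-resolve the default CNI network. A minimal sketch of such a directory watch using fsnotify, offered as an illustration only (CRI-O's actual monitor is internal to the daemon):
	
	package main
	
	import (
		"log"
		"path/filepath"
	
		"github.com/fsnotify/fsnotify"
	)
	
	func main() {
		watcher, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer watcher.Close()
	
		// Watch the CNI config directory, like CRI-O's monitor in the log above.
		if err := watcher.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-watcher.Events:
				// Only completed .conflist files matter for the default network;
				// the .temp file written first is filtered out here.
				if filepath.Ext(ev.Name) == ".conflist" {
					log.Printf("CNI monitoring event %s %q; re-resolving default network", ev.Op, ev.Name)
				}
			case err := <-watcher.Errors:
				log.Println("watch error:", err)
			}
		}
	}
	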
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	11cd790594580       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           18 seconds ago      Running             storage-provisioner         2                   d2e4fcf81f2f2       storage-provisioner                          kube-system
	8d27bd77cd846       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   e6ed4649b8632       dashboard-metrics-scraper-6ffb444bf9-fxdmg   kubernetes-dashboard
	cc04991aeb9ac       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   87b227dab67a6       kubernetes-dashboard-855c9754f9-4sq6t        kubernetes-dashboard
	886a327b32a4b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago      Running             coredns                     1                   52899b32a22a6       coredns-66bc5c9577-czxmg                     kube-system
	31f05cb96945b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   a08519cbd1aab       kindnet-tqpf7                                kube-system
	38c23aad4fa88       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   cb84d4b6dba6a       kube-proxy-bv8x8                             kube-system
	6a30d7df87c11       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   8c590a4ec7099       busybox                                      default
	9a3ab7492a03c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           49 seconds ago      Exited              storage-provisioner         1                   d2e4fcf81f2f2       storage-provisioner                          kube-system
	8f15a98da5f33       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   84c09336907bf       kube-scheduler-no-preload-744804             kube-system
	6e11ee6379c80       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   13c3c6008f660       etcd-no-preload-744804                       kube-system
	1c3907b84b271       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   1157b1f3c1a64       kube-apiserver-no-preload-744804             kube-system
	4f36e401d485e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   9d641ae1c2e52       kube-controller-manager-no-preload-744804    kube-system
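	
	In the container-status table, ATTEMPT is the per-container restart count within its pod sandbox: storage-provisioner shows a Running attempt 2 next to its Exited attempt 1, both under the same POD ID (d2e4fcf81f2f2), because the container was restarted after it first exited following the node restart.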
	
	
	==> coredns [886a327b32a4bf69e26cd65a10e8e6b11c1d668342dc5a21d9f727e71375f98b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57509 - 57690 "HINFO IN 3550779046318444203.7077144803148319311. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035534578s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
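	
	The coredns errors above mean its kubernetes plugin could not reach the apiserver's ClusterIP (10.96.0.1:443) for roughly the first 30 seconds after the restart, which also explains why the pod stayed not-Ready until 13:26:44; once kube-proxy and kindnet reprogram service routing, the reflectors recover on their own. A quick probe of the same path, as a sketch (it must run inside the cluster for the ClusterIP to be routable):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Same target the coredns kubernetes plugin was timing out against.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("cannot reach apiserver ClusterIP:", err) // the i/o timeout seen in the log
			return
		}
		conn.Close()
		fmt.Println("apiserver ClusterIP reachable")
	}
	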
	
	
	==> describe nodes <==
	Name:               no-preload-744804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-744804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=no-preload-744804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_24_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:24:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-744804
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:26:42 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:26:42 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:26:42 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:26:42 +0000   Mon, 20 Oct 2025 13:25:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-744804
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e6ebf1aa-cf6a-460e-af7e-a66b26d17d7c
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-czxmg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m35s
	  kube-system                 etcd-no-preload-744804                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m40s
	  kube-system                 kindnet-tqpf7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m36s
	  kube-system                 kube-apiserver-no-preload-744804              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-controller-manager-no-preload-744804     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-proxy-bv8x8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-scheduler-no-preload-744804              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fxdmg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4sq6t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m33s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m52s (x8 over 2m52s)  kubelet          Node no-preload-744804 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m52s (x8 over 2m52s)  kubelet          Node no-preload-744804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m52s (x8 over 2m52s)  kubelet          Node no-preload-744804 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m41s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m40s                  kubelet          Node no-preload-744804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s                  kubelet          Node no-preload-744804 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m40s                  kubelet          Node no-preload-744804 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m37s                  node-controller  Node no-preload-744804 event: Registered Node no-preload-744804 in Controller
	  Normal   NodeReady                97s                    kubelet          Node no-preload-744804 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 59s)      kubelet          Node no-preload-744804 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 59s)      kubelet          Node no-preload-744804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 59s)      kubelet          Node no-preload-744804 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node no-preload-744804 event: Registered Node no-preload-744804 in Controller
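	
	The percentages under Allocated resources are relative to the node's allocatable capacity and are truncated to whole percents: the CPU requests above (100m + 100m + 100m + 250m + 200m + 100m) sum to 850m of 2000m allocatable, i.e. 42.5%, shown as 42%; the 220Mi of memory requests against 8022296Ki allocatable is about 2.8%, shown as 2%.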
	
	
	==> dmesg <==
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	[ +43.225983] overlayfs: idmapped layers are currently not supported
	[Oct20 13:24] overlayfs: idmapped layers are currently not supported
	[Oct20 13:25] overlayfs: idmapped layers are currently not supported
	[ +42.548676] overlayfs: idmapped layers are currently not supported
	[Oct20 13:26] overlayfs: idmapped layers are currently not supported
	[Oct20 13:27] kauditd_printk_skb: 8 callbacks suppressed
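	
	Each "overlayfs: idmapped layers are currently not supported" line above is the 5.15 kernel declining idmapped overlayfs layers for a new container filesystem mount; support for idmapped layers in overlayfs landed only in later kernels, so on this host the runtime falls back and the message is noise rather than an error.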
	
	
	==> etcd [6e11ee6379c8057195df4b7174497050554e2746585cffbcff5d6ee674caccd2] <==
	{"level":"warn","ts":"2025-10-20T13:26:09.657465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.695008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.724884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.750529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.785651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.855265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.858674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.897038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.917230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.962244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.983057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.001641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.020478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.040872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.067584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.084866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.113434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.139732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.157222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.175626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.190865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.224908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.245132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.261455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.381901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32796","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:27:02 up  3:09,  0 user,  load average: 2.77, 2.93, 2.63
	Linux no-preload-744804 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [31f05cb96945b51652801d40a5cb2c12ac111770818e466dcbcbef7e5df312b3] <==
	I1020 13:26:12.730803       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:26:12.731253       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:26:12.731431       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:26:12.731444       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:26:12.731458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:26:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:26:12.919799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:26:12.925180       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:26:12.925274       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:26:12.926308       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:26:42.919946       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:26:42.926550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 13:26:42.926550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 13:26:42.926645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1020 13:26:44.525971       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:26:44.526137       1 metrics.go:72] Registering metrics
	I1020 13:26:44.526212       1 controller.go:711] "Syncing nftables rules"
	I1020 13:26:52.924452       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:26:52.924501       1 main.go:301] handling current node
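
The reflector "dial tcp 10.96.0.1:443: i/o timeout" errors above fall in the window while the apiserver was restarting; once it came back, kindnet's caches synced at 13:26:44 and node handling resumed. A quick reachability check against the same apiserver from the host, using the context this report already uses:

	kubectl --context no-preload-744804 get --raw /readyz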
	
	
	==> kube-apiserver [1c3907b84b2719c834370b3a234bfcf74dccb4f164f5f6e62b92590abdba5b57] <==
	I1020 13:26:11.399777       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1020 13:26:11.400206       1 aggregator.go:171] initial CRD sync complete...
	I1020 13:26:11.400227       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 13:26:11.400236       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:26:11.400242       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:26:11.439875       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1020 13:26:11.459222       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:26:11.459893       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 13:26:11.467952       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 13:26:11.474878       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 13:26:11.474922       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 13:26:11.474938       1 policy_source.go:240] refreshing policies
	I1020 13:26:11.486817       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 13:26:11.516848       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:26:11.929992       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:26:12.067373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:26:12.103381       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:26:12.213420       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:26:12.357322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:26:12.420905       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:26:12.671000       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.1.0"}
	I1020 13:26:12.833773       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.154.136"}
	I1020 13:26:15.598382       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:26:15.844573       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:26:16.044741       1 controller.go:667] quota admission added evaluator for: endpoints
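
The two alloc.go lines show the dashboard Services receiving their ClusterIPs (10.101.1.0 and 10.104.154.136) during this start. Listing them after the fact confirms the allocations stuck:

	kubectl --context no-preload-744804 -n kubernetes-dashboard get svc -o wide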
	
	
	==> kube-controller-manager [4f36e401d485e4f4d90833026e33ea3530d32bdd15cccc9487bf620da50270af] <==
	I1020 13:26:15.596730       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:26:15.599253       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 13:26:15.604847       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:26:15.608245       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 13:26:15.610906       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:26:15.614153       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:26:15.618357       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:26:15.622676       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:26:15.628869       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:26:15.637596       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:26:15.637704       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 13:26:15.637783       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 13:26:15.637856       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:26:15.638982       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:26:15.639083       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:26:15.639178       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:26:15.639247       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-744804"
	I1020 13:26:15.639291       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 13:26:15.647759       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 13:26:15.648987       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:26:15.649008       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:26:15.649016       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:26:15.653767       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1020 13:26:15.660746       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:26:15.662331       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [38c23aad4fa887459e239041be46dccc58a99edeb50d18acfa6a539f90c4f00e] <==
	I1020 13:26:13.146495       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:26:13.254386       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:26:13.354982       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:26:13.355026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:26:13.355118       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:26:13.375160       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:26:13.375285       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:26:13.379716       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:26:13.380149       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:26:13.380920       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:26:13.382276       1 config.go:200] "Starting service config controller"
	I1020 13:26:13.382348       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:26:13.382392       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:26:13.382440       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:26:13.382478       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:26:13.382512       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:26:13.383153       1 config.go:309] "Starting node config controller"
	I1020 13:26:13.385951       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:26:13.386027       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:26:13.482728       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:26:13.482747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:26:13.482765       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
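
The server.go:256 warning is kube-proxy itself suggesting `--nodeport-addresses primary`, so NodePorts bind only to each node's primary IPs instead of all local IPs. In a kubeadm-managed cluster the equivalent lands in the KubeProxyConfiguration; a sketch of just the relevant field (how this cluster's kube-proxy ConfigMap is managed is an assumption):

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses:
	  - primary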
	
	
	==> kube-scheduler [8f15a98da5f338160fc0802f3aac18ef56c3a8ac8e7f0d8a95b82a15d0cbfba5] <==
	I1020 13:26:09.529071       1 serving.go:386] Generated self-signed cert in-memory
	I1020 13:26:12.943125       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:26:12.953692       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:26:12.967840       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:26:12.968043       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 13:26:12.968103       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 13:26:12.968154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:26:12.989019       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:26:13.005224       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:26:13.005277       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:26:13.005285       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:26:13.068702       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 13:26:13.108481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:26:13.108663       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:26:11 no-preload-744804 kubelet[765]: I1020 13:26:11.972133     765 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 20 13:26:14 no-preload-744804 kubelet[765]: I1020 13:26:14.349785     765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: I1020 13:26:16.257406     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4sq6t\" (UID: \"7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4sq6t"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: I1020 13:26:16.257468     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4cc\" (UniqueName: \"kubernetes.io/projected/e346a326-7591-4c13-9ccb-72ebc2cfac5f-kube-api-access-5m4cc\") pod \"dashboard-metrics-scraper-6ffb444bf9-fxdmg\" (UID: \"e346a326-7591-4c13-9ccb-72ebc2cfac5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: I1020 13:26:16.257493     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e346a326-7591-4c13-9ccb-72ebc2cfac5f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fxdmg\" (UID: \"e346a326-7591-4c13-9ccb-72ebc2cfac5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: I1020 13:26:16.257519     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzgk5\" (UniqueName: \"kubernetes.io/projected/7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4-kube-api-access-nzgk5\") pod \"kubernetes-dashboard-855c9754f9-4sq6t\" (UID: \"7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4sq6t"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: W1020 13:26:16.483960     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/crio-e6ed4649b863223f775a5d9a56678a2e4ca5ebed105704e9ad61fff7216131d0 WatchSource:0}: Error finding container e6ed4649b863223f775a5d9a56678a2e4ca5ebed105704e9ad61fff7216131d0: Status 404 returned error can't find the container with id e6ed4649b863223f775a5d9a56678a2e4ca5ebed105704e9ad61fff7216131d0
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: W1020 13:26:16.497321     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/crio-87b227dab67a6f6cd3452be431dd9520c40faa66a8e8d3479b80f6a5264ea53c WatchSource:0}: Error finding container 87b227dab67a6f6cd3452be431dd9520c40faa66a8e8d3479b80f6a5264ea53c: Status 404 returned error can't find the container with id 87b227dab67a6f6cd3452be431dd9520c40faa66a8e8d3479b80f6a5264ea53c
	Oct 20 13:26:22 no-preload-744804 kubelet[765]: I1020 13:26:22.162538     765 scope.go:117] "RemoveContainer" containerID="e69010e6eec4c2a27f00170e8567ce8afc21a6f51862cc2741daf49aeefd2507"
	Oct 20 13:26:23 no-preload-744804 kubelet[765]: I1020 13:26:23.168409     765 scope.go:117] "RemoveContainer" containerID="e69010e6eec4c2a27f00170e8567ce8afc21a6f51862cc2741daf49aeefd2507"
	Oct 20 13:26:23 no-preload-744804 kubelet[765]: I1020 13:26:23.168736     765 scope.go:117] "RemoveContainer" containerID="f90384fe9f3b867df76db42baafab69de9e491a3ce27ed7cd5853f88eff595cb"
	Oct 20 13:26:23 no-preload-744804 kubelet[765]: E1020 13:26:23.168908     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fxdmg_kubernetes-dashboard(e346a326-7591-4c13-9ccb-72ebc2cfac5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg" podUID="e346a326-7591-4c13-9ccb-72ebc2cfac5f"
	Oct 20 13:26:26 no-preload-744804 kubelet[765]: I1020 13:26:26.449526     765 scope.go:117] "RemoveContainer" containerID="f90384fe9f3b867df76db42baafab69de9e491a3ce27ed7cd5853f88eff595cb"
	Oct 20 13:26:26 no-preload-744804 kubelet[765]: E1020 13:26:26.449715     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fxdmg_kubernetes-dashboard(e346a326-7591-4c13-9ccb-72ebc2cfac5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg" podUID="e346a326-7591-4c13-9ccb-72ebc2cfac5f"
	Oct 20 13:26:26 no-preload-744804 kubelet[765]: I1020 13:26:26.463328     765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4sq6t" podStartSLOduration=1.160297869 podStartE2EDuration="10.463306824s" podCreationTimestamp="2025-10-20 13:26:16 +0000 UTC" firstStartedPulling="2025-10-20 13:26:16.500749794 +0000 UTC m=+12.726717183" lastFinishedPulling="2025-10-20 13:26:25.803758733 +0000 UTC m=+22.029726138" observedRunningTime="2025-10-20 13:26:26.223859403 +0000 UTC m=+22.449826800" watchObservedRunningTime="2025-10-20 13:26:26.463306824 +0000 UTC m=+22.689274221"
	Oct 20 13:26:36 no-preload-744804 kubelet[765]: I1020 13:26:36.994111     765 scope.go:117] "RemoveContainer" containerID="f90384fe9f3b867df76db42baafab69de9e491a3ce27ed7cd5853f88eff595cb"
	Oct 20 13:26:37 no-preload-744804 kubelet[765]: I1020 13:26:37.212273     765 scope.go:117] "RemoveContainer" containerID="f90384fe9f3b867df76db42baafab69de9e491a3ce27ed7cd5853f88eff595cb"
	Oct 20 13:26:37 no-preload-744804 kubelet[765]: I1020 13:26:37.212585     765 scope.go:117] "RemoveContainer" containerID="8d27bd77cd846ddebfc1d2a5c08dc83af7d745f6da25670e469bb37556ffcac2"
	Oct 20 13:26:37 no-preload-744804 kubelet[765]: E1020 13:26:37.212750     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fxdmg_kubernetes-dashboard(e346a326-7591-4c13-9ccb-72ebc2cfac5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg" podUID="e346a326-7591-4c13-9ccb-72ebc2cfac5f"
	Oct 20 13:26:43 no-preload-744804 kubelet[765]: I1020 13:26:43.229927     765 scope.go:117] "RemoveContainer" containerID="9a3ab7492a03c361b566afcb199e4bae1397925004be8ee7d219da31312fd02b"
	Oct 20 13:26:46 no-preload-744804 kubelet[765]: I1020 13:26:46.449084     765 scope.go:117] "RemoveContainer" containerID="8d27bd77cd846ddebfc1d2a5c08dc83af7d745f6da25670e469bb37556ffcac2"
	Oct 20 13:26:46 no-preload-744804 kubelet[765]: E1020 13:26:46.450633     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fxdmg_kubernetes-dashboard(e346a326-7591-4c13-9ccb-72ebc2cfac5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg" podUID="e346a326-7591-4c13-9ccb-72ebc2cfac5f"
	Oct 20 13:26:58 no-preload-744804 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:26:58 no-preload-744804 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:26:58 no-preload-744804 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
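
The kubelet entries show dashboard-metrics-scraper cycling through CrashLoopBackOff with a growing back-off (10s, then 20s). The previous container's output usually explains the crash; the pod name below is taken from the log lines above:

	kubectl --context no-preload-744804 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-fxdmg --previous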
	
	
	==> kubernetes-dashboard [cc04991aeb9acd4eb11bb78237f8d40eb0cbc8fd30f87da44618819c62b1650e] <==
	2025/10/20 13:26:25 Starting overwatch
	2025/10/20 13:26:25 Using namespace: kubernetes-dashboard
	2025/10/20 13:26:25 Using in-cluster config to connect to apiserver
	2025/10/20 13:26:25 Using secret token for csrf signing
	2025/10/20 13:26:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 13:26:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 13:26:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 13:26:25 Generating JWE encryption key
	2025/10/20 13:26:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 13:26:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 13:26:26 Initializing JWE encryption key from synchronized object
	2025/10/20 13:26:26 Creating in-cluster Sidecar client
	2025/10/20 13:26:26 Serving insecurely on HTTP port: 9090
	2025/10/20 13:26:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:26:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
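
The failing metric client health check is consistent with the dashboard-metrics-scraper pod crash-looping above: the Service exists but has no ready backend. One way to confirm is to check whether any EndpointSlice for that Service lists a ready endpoint:

	kubectl --context no-preload-744804 -n kubernetes-dashboard \
	  get endpointslices -l kubernetes.io/service-name=dashboard-metrics-scraper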
	
	
	==> storage-provisioner [11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b] <==
	I1020 13:26:43.291871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 13:26:43.304034       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:26:43.304088       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 13:26:43.314252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:26:46.769553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:26:51.038639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:26:54.638415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:26:57.691734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:00.713961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:00.719436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:27:00.719588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:27:00.719911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"570f9e7e-e0b0-42c3-8be9-6674d6360b18", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-744804_71705cee-8782-461d-a060-ca10a8117718 became leader
	I1020 13:27:00.719938       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-744804_71705cee-8782-461d-a060-ca10a8117718!
	W1020 13:27:00.726466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:00.742675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:27:00.820621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-744804_71705cee-8782-461d-a060-ca10a8117718!
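
The warnings above come from the provisioner's leader election, which still records its lease on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) rather than a Lease; the election itself succeeded at 13:27:00. The lease record can be inspected directly:

	kubectl --context no-preload-744804 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml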
	
	
	==> storage-provisioner [9a3ab7492a03c361b566afcb199e4bae1397925004be8ee7d219da31312fd02b] <==
	I1020 13:26:13.091393       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 13:26:43.108555       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-744804 -n no-preload-744804
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-744804 -n no-preload-744804: exit status 2 (532.794951ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-744804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-744804
helpers_test.go:243: (dbg) docker inspect no-preload-744804:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41",
	        "Created": "2025-10-20T13:23:35.394425539Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 509487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-20T13:25:56.407662694Z",
	            "FinishedAt": "2025-10-20T13:25:55.229292782Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/hostname",
	        "HostsPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/hosts",
	        "LogPath": "/var/lib/docker/containers/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41-json.log",
	        "Name": "/no-preload-744804",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-744804:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-744804",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41",
	                "LowerDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5-init/diff:/var/lib/docker/overlay2/e011614c969e3e5ed6757526241756ea6fc672789a7660b2948a8b7c8180b97b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/22f24099b185b72da7ee022e5624c6092520cdeb32d998f51fc3c7f4e2d251f5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-744804",
	                "Source": "/var/lib/docker/volumes/no-preload-744804/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-744804",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-744804",
	                "name.minikube.sigs.k8s.io": "no-preload-744804",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce55b2cab51cff8c423d4ede4796543b2fbfda1944eaeac257f8855be870e989",
	            "SandboxKey": "/var/run/docker/netns/ce55b2cab51c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-744804": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:b7:74:9e:92:97",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "307dee052f6f076bff152f38e429e93b9787d013b30129b59f6e7b891323decf",
	                    "EndpointID": "b21dfe8d4bad29545064b73f4c4e4313da88156ec8ee1252a07990cdf1676c70",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-744804",
	                        "7c7d00bb470e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
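When only a couple of fields from the inspect dump matter (container state and the host port mapped to the apiserver's 8443), a Go template reduces the output to just those fields; a small sketch against the same container name:

	docker inspect --format '{{.State.Status}}' no-preload-744804
	docker inspect --format '{{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' no-preload-744804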
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-744804 -n no-preload-744804
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-744804 -n no-preload-744804: exit status 2 (440.372685ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-744804 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-744804 logs -n 25: (1.564077229s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p default-k8s-diff-port-794175                                                                                                                                                                                                               │ default-k8s-diff-port-794175 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ delete  │ -p disable-driver-mounts-972433                                                                                                                                                                                                               │ disable-driver-mounts-972433 │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:23 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:23 UTC │ 20 Oct 25 13:25 UTC │
	│ image   │ embed-certs-979197 image list --format=json                                                                                                                                                                                                   │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ pause   │ -p embed-certs-979197 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ delete  │ -p embed-certs-979197                                                                                                                                                                                                                         │ embed-certs-979197           │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-018730 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │                     │
	│ stop    │ -p newest-cni-018730 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:24 UTC │ 20 Oct 25 13:25 UTC │
	│ addons  │ enable dashboard -p newest-cni-018730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ image   │ newest-cni-018730 image list --format=json                                                                                                                                                                                                    │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ pause   │ -p newest-cni-018730 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	│ delete  │ -p newest-cni-018730                                                                                                                                                                                                                          │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ delete  │ -p newest-cni-018730                                                                                                                                                                                                                          │ newest-cni-018730            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p auto-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-308474                  │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:26 UTC │
	│ addons  │ enable metrics-server -p no-preload-744804 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │                     │
	│ stop    │ -p no-preload-744804 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ addons  │ enable dashboard -p no-preload-744804 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:25 UTC │
	│ start   │ -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:25 UTC │ 20 Oct 25 13:26 UTC │
	│ ssh     │ -p auto-308474 pgrep -a kubelet                                                                                                                                                                                                               │ auto-308474                  │ jenkins │ v1.37.0 │ 20 Oct 25 13:26 UTC │ 20 Oct 25 13:26 UTC │
	│ image   │ no-preload-744804 image list --format=json                                                                                                                                                                                                    │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:26 UTC │ 20 Oct 25 13:26 UTC │
	│ pause   │ -p no-preload-744804 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-744804            │ jenkins │ v1.37.0 │ 20 Oct 25 13:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
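
The final `pause` entry has no END TIME, matching the failure under post-mortem here. Re-running that step by hand outside the test harness (same binary path this run used) reproduces the exit status directly:

	out/minikube-linux-arm64 pause -p no-preload-744804 --alsologtostderr -v=1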
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 13:25:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 13:25:56.011636  509360 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:25:56.012318  509360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:56.012359  509360 out.go:374] Setting ErrFile to fd 2...
	I1020 13:25:56.013984  509360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:25:56.014342  509360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:25:56.014866  509360 out.go:368] Setting JSON to false
	I1020 13:25:56.015985  509360 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11306,"bootTime":1760955450,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:25:56.016092  509360 start.go:141] virtualization:  
	I1020 13:25:56.019296  509360 out.go:179] * [no-preload-744804] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:25:56.023201  509360 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:25:56.023278  509360 notify.go:220] Checking for updates...
	I1020 13:25:56.029292  509360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:25:56.032090  509360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:25:56.034964  509360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:25:56.037864  509360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:25:56.040767  509360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:25:56.044229  509360 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:25:56.044883  509360 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:25:56.071290  509360 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:25:56.071418  509360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:56.169397  509360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-20 13:25:56.155087483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:56.169494  509360 docker.go:318] overlay module found
	I1020 13:25:56.172864  509360 out.go:179] * Using the docker driver based on existing profile
	I1020 13:25:56.175829  509360 start.go:305] selected driver: docker
	I1020 13:25:56.175850  509360 start.go:925] validating driver "docker" against &{Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:56.175956  509360 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:25:56.176650  509360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:25:56.293743  509360 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-20 13:25:56.278849902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:25:56.294087  509360 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:25:56.294114  509360 cni.go:84] Creating CNI manager for ""
	I1020 13:25:56.294169  509360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:25:56.294211  509360 start.go:349] cluster config:
	{Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:25:56.297485  509360 out.go:179] * Starting "no-preload-744804" primary control-plane node in "no-preload-744804" cluster
	I1020 13:25:56.300454  509360 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 13:25:56.303322  509360 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1020 13:25:56.306237  509360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:25:56.306388  509360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json ...
	I1020 13:25:56.306713  509360 cache.go:107] acquiring lock: {Name:mk2466d3c957a995adbebbabeab0fa3cc60b0749 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.306800  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1020 13:25:56.306808  509360 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.741µs
	I1020 13:25:56.306816  509360 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1020 13:25:56.306827  509360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 13:25:56.307008  509360 cache.go:107] acquiring lock: {Name:mk2f501eec0d7af6312aef6efa1f5bbad5f4d684 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307055  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1020 13:25:56.307061  509360 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 58.175µs
	I1020 13:25:56.307068  509360 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1020 13:25:56.307078  509360 cache.go:107] acquiring lock: {Name:mk91e48e01c9d742f280bc2f9044086cb15ac8fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307112  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1020 13:25:56.307117  509360 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 40.238µs
	I1020 13:25:56.307122  509360 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1020 13:25:56.307131  509360 cache.go:107] acquiring lock: {Name:mk06b7edc57ee881bc4af5e7d1c0bb5270ebff49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307162  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1020 13:25:56.307166  509360 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 36.004µs
	I1020 13:25:56.307174  509360 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1020 13:25:56.307183  509360 cache.go:107] acquiring lock: {Name:mk1d0a9075d8d12111d126a101053db6ac0a7b69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307214  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1020 13:25:56.307218  509360 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 36.776µs
	I1020 13:25:56.307225  509360 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1020 13:25:56.307235  509360 cache.go:107] acquiring lock: {Name:mkd8eb3de224a6da14efa26f40075e815e71b6a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307263  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1020 13:25:56.307268  509360 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 33.756µs
	I1020 13:25:56.307273  509360 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1020 13:25:56.307288  509360 cache.go:107] acquiring lock: {Name:mk76c9e0dd61216d0c0ba53e6cfb9cbe19ddfd70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307315  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1020 13:25:56.307320  509360 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.451µs
	I1020 13:25:56.307326  509360 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1020 13:25:56.307335  509360 cache.go:107] acquiring lock: {Name:mkf695cbf431ff83306d5e1211f07fc194d769c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.307360  509360 cache.go:115] /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1020 13:25:56.307371  509360 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 30.933µs
	I1020 13:25:56.307377  509360 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1020 13:25:56.307383  509360 cache.go:87] Successfully saved all images to host disk.
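Note: the cache checks above are a per-image exists-or-save pattern: take the image's file lock, test whether its tarball already sits under cache/images/arm64 (tag colon mapped to an underscore), and only pull-and-save on a miss. A rough shell equivalent, with paths taken from the log (the real logic lives in minikube's cache.go):

    CACHE=/home/jenkins/minikube-integration/21773-296391/.minikube/cache/images/arm64
    for img in gcr.io/k8s-minikube/storage-provisioner:v5 registry.k8s.io/pause:3.10.1; do
        tar="$CACHE/${img/:/_}"       # e.g. .../registry.k8s.io/pause_3.10.1
        if [ -e "$tar" ]; then
            echo "cache hit:  $img"   # logged above as "exists ... succeeded"
        else
            echo "cache miss: $img"   # minikube would save the image to a tarball here
        fi
    done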
	I1020 13:25:56.337199  509360 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1020 13:25:56.337219  509360 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1020 13:25:56.337232  509360 cache.go:232] Successfully downloaded all kic artifacts
	I1020 13:25:56.337254  509360 start.go:360] acquireMachinesLock for no-preload-744804: {Name:mk60261f5e12334720a2e0b8e33ce6265dbb09b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1020 13:25:56.337310  509360 start.go:364] duration metric: took 35.233µs to acquireMachinesLock for "no-preload-744804"
	I1020 13:25:56.337337  509360 start.go:96] Skipping create...Using existing machine configuration
	I1020 13:25:56.337346  509360 fix.go:54] fixHost starting: 
	I1020 13:25:56.337590  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:25:56.359879  509360 fix.go:112] recreateIfNeeded on no-preload-744804: state=Stopped err=<nil>
	W1020 13:25:56.359915  509360 fix.go:138] unexpected machine state, will restart: <nil>
	I1020 13:25:55.772525  506566 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.14659572s
	I1020 13:25:58.189668  506566 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.56676406s
	I1020 13:25:58.626266  506566 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.003290784s
	I1020 13:25:58.647080  506566 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1020 13:25:58.663387  506566 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1020 13:25:58.683497  506566 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1020 13:25:58.683978  506566 kubeadm.go:318] [mark-control-plane] Marking the node auto-308474 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1020 13:25:58.699134  506566 kubeadm.go:318] [bootstrap-token] Using token: rifdlr.uoz69o4zgmb29avx
	I1020 13:25:58.702126  506566 out.go:252]   - Configuring RBAC rules ...
	I1020 13:25:58.702281  506566 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1020 13:25:58.708578  506566 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1020 13:25:58.717677  506566 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1020 13:25:58.721787  506566 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1020 13:25:58.726185  506566 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1020 13:25:58.730501  506566 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1020 13:25:59.033644  506566 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1020 13:25:59.493714  506566 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1020 13:26:00.135533  506566 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1020 13:26:00.135569  506566 kubeadm.go:318] 
	I1020 13:26:00.135656  506566 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1020 13:26:00.135668  506566 kubeadm.go:318] 
	I1020 13:26:00.135756  506566 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1020 13:26:00.135761  506566 kubeadm.go:318] 
	I1020 13:26:00.148504  506566 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1020 13:26:00.148613  506566 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1020 13:26:00.148669  506566 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1020 13:26:00.148675  506566 kubeadm.go:318] 
	I1020 13:26:00.148733  506566 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1020 13:26:00.148757  506566 kubeadm.go:318] 
	I1020 13:26:00.148809  506566 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1020 13:26:00.148817  506566 kubeadm.go:318] 
	I1020 13:26:00.148872  506566 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1020 13:26:00.148952  506566 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1020 13:26:00.149023  506566 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1020 13:26:00.149030  506566 kubeadm.go:318] 
	I1020 13:26:00.149120  506566 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1020 13:26:00.149201  506566 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1020 13:26:00.149206  506566 kubeadm.go:318] 
	I1020 13:26:00.149296  506566 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token rifdlr.uoz69o4zgmb29avx \
	I1020 13:26:00.149404  506566 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 \
	I1020 13:26:00.149427  506566 kubeadm.go:318] 	--control-plane 
	I1020 13:26:00.149431  506566 kubeadm.go:318] 
	I1020 13:26:00.149521  506566 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1020 13:26:00.149526  506566 kubeadm.go:318] 
	I1020 13:26:00.149612  506566 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token rifdlr.uoz69o4zgmb29avx \
	I1020 13:26:00.149720  506566 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:b1db577fa4a9e8f0c7058692e8e359fba9190de23bc03cf08f24dd22d7e49db5 
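Note: the --discovery-token-ca-cert-hash in the join commands can be recomputed from the cluster CA with the standard recipe from the kubeadm documentation (cert path per this cluster's certificatesDir, /var/lib/minikube/certs; assumes kubeadm's default RSA CA key):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print the sha256 hash shown in the join commands above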
	I1020 13:26:00.169890  506566 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1020 13:26:00.170140  506566 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1020 13:26:00.170262  506566 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1020 13:26:00.170294  506566 cni.go:84] Creating CNI manager for ""
	I1020 13:26:00.170315  506566 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:26:00.173507  506566 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1020 13:25:56.363118  509360 out.go:252] * Restarting existing docker container for "no-preload-744804" ...
	I1020 13:25:56.363198  509360 cli_runner.go:164] Run: docker start no-preload-744804
	I1020 13:25:56.734348  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:25:56.758968  509360 kic.go:430] container "no-preload-744804" state is running.
	I1020 13:25:56.759368  509360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:25:56.786036  509360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/config.json ...
	I1020 13:25:56.786271  509360 machine.go:93] provisionDockerMachine start ...
	I1020 13:25:56.786326  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:25:56.813992  509360 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:56.814325  509360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1020 13:25:56.814334  509360 main.go:141] libmachine: About to run SSH command:
	hostname
	I1020 13:25:56.815168  509360 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54772->127.0.0.1:33468: read: connection reset by peer
	I1020 13:25:59.964048  509360 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744804
	
	I1020 13:25:59.964134  509360 ubuntu.go:182] provisioning hostname "no-preload-744804"
	I1020 13:25:59.964231  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:25:59.982246  509360 main.go:141] libmachine: Using SSH client type: native
	I1020 13:25:59.982549  509360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1020 13:25:59.982565  509360 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-744804 && echo "no-preload-744804" | sudo tee /etc/hostname
	I1020 13:26:00.345250  509360 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-744804
	
	I1020 13:26:00.345347  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:00.370643  509360 main.go:141] libmachine: Using SSH client type: native
	I1020 13:26:00.371000  509360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1020 13:26:00.371034  509360 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-744804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-744804/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-744804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1020 13:26:00.557214  509360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1020 13:26:00.557243  509360 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21773-296391/.minikube CaCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21773-296391/.minikube}
	I1020 13:26:00.557266  509360 ubuntu.go:190] setting up certificates
	I1020 13:26:00.557276  509360 provision.go:84] configureAuth start
	I1020 13:26:00.557337  509360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:26:00.589850  509360 provision.go:143] copyHostCerts
	I1020 13:26:00.589932  509360 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem, removing ...
	I1020 13:26:00.589958  509360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem
	I1020 13:26:00.590041  509360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/ca.pem (1078 bytes)
	I1020 13:26:00.590152  509360 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem, removing ...
	I1020 13:26:00.590162  509360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem
	I1020 13:26:00.590191  509360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/cert.pem (1123 bytes)
	I1020 13:26:00.590253  509360 exec_runner.go:144] found /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem, removing ...
	I1020 13:26:00.590264  509360 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem
	I1020 13:26:00.590298  509360 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21773-296391/.minikube/key.pem (1679 bytes)
	I1020 13:26:00.590366  509360 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem org=jenkins.no-preload-744804 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-744804]
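Note: the SAN list baked into the freshly generated server cert can be confirmed with openssl against the same server.pem:

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'
    # the DNS/IP entries should match the san=[...] list logged above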
	I1020 13:26:00.965770  509360 provision.go:177] copyRemoteCerts
	I1020 13:26:00.965847  509360 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1020 13:26:00.965899  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:00.986666  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:00.176514  506566 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1020 13:26:00.212180  506566 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1020 13:26:00.212211  506566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1020 13:26:00.322114  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
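Note: once the manifest is applied, the recommended kindnet CNI should come up as a DaemonSet in kube-system. A quick check (the app=kindnet label is an assumption about minikube's kindnet manifest, not shown in this log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get daemonsets,pods -l app=kindnet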
	I1020 13:26:00.875621  506566 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1020 13:26:00.875757  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:00.875837  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-308474 minikube.k8s.io/updated_at=2025_10_20T13_26_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6 minikube.k8s.io/name=auto-308474 minikube.k8s.io/primary=true
	I1020 13:26:00.908118  506566 ops.go:34] apiserver oom_adj: -16
	I1020 13:26:01.112775  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:01.094481  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1020 13:26:01.119812  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1020 13:26:01.146357  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1020 13:26:01.169932  509360 provision.go:87] duration metric: took 612.629829ms to configureAuth
	I1020 13:26:01.169961  509360 ubuntu.go:206] setting minikube options for container-runtime
	I1020 13:26:01.170158  509360 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:26:01.170284  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.190220  509360 main.go:141] libmachine: Using SSH client type: native
	I1020 13:26:01.190542  509360 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1020 13:26:01.190565  509360 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1020 13:26:01.547120  509360 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1020 13:26:01.547144  509360 machine.go:96] duration metric: took 4.760864246s to provisionDockerMachine
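Note: the sysconfig write above leaves a one-line environment file on the node; how crio consumes it depends on the kicbase unit wiring (the EnvironmentFile= reference is an assumption, not shown in this log):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i environmentfile   # assumption: the unit sources the file above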
	I1020 13:26:01.547156  509360 start.go:293] postStartSetup for "no-preload-744804" (driver="docker")
	I1020 13:26:01.547167  509360 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1020 13:26:01.547224  509360 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1020 13:26:01.547261  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.577174  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:01.688993  509360 ssh_runner.go:195] Run: cat /etc/os-release
	I1020 13:26:01.693044  509360 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1020 13:26:01.693071  509360 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1020 13:26:01.693082  509360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/addons for local assets ...
	I1020 13:26:01.693139  509360 filesync.go:126] Scanning /home/jenkins/minikube-integration/21773-296391/.minikube/files for local assets ...
	I1020 13:26:01.693214  509360 filesync.go:149] local asset: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem -> 2982592.pem in /etc/ssl/certs
	I1020 13:26:01.693311  509360 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1020 13:26:01.702124  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:26:01.721940  509360 start.go:296] duration metric: took 174.769245ms for postStartSetup
	I1020 13:26:01.722020  509360 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:26:01.722061  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.739094  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:01.841537  509360 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1020 13:26:01.846292  509360 fix.go:56] duration metric: took 5.508938955s for fixHost
	I1020 13:26:01.846318  509360 start.go:83] releasing machines lock for "no-preload-744804", held for 5.508994308s
	I1020 13:26:01.846389  509360 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-744804
	I1020 13:26:01.864646  509360 ssh_runner.go:195] Run: cat /version.json
	I1020 13:26:01.864716  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.864981  509360 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1020 13:26:01.865031  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:01.886809  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:01.898222  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:02.094351  509360 ssh_runner.go:195] Run: systemctl --version
	I1020 13:26:02.101254  509360 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1020 13:26:02.151814  509360 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1020 13:26:02.158831  509360 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1020 13:26:02.159078  509360 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1020 13:26:02.168389  509360 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1020 13:26:02.168415  509360 start.go:495] detecting cgroup driver to use...
	I1020 13:26:02.168478  509360 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1020 13:26:02.168570  509360 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1020 13:26:02.185562  509360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1020 13:26:02.203363  509360 docker.go:218] disabling cri-docker service (if available) ...
	I1020 13:26:02.203468  509360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1020 13:26:02.223031  509360 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1020 13:26:02.243032  509360 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1020 13:26:02.381801  509360 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1020 13:26:02.510104  509360 docker.go:234] disabling docker service ...
	I1020 13:26:02.510181  509360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1020 13:26:02.525569  509360 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1020 13:26:02.538798  509360 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1020 13:26:02.696607  509360 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1020 13:26:02.858499  509360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1020 13:26:02.873757  509360 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1020 13:26:02.888259  509360 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1020 13:26:02.888336  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.898371  509360 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1020 13:26:02.898446  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.908797  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.917961  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.927439  509360 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1020 13:26:02.935928  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.944871  509360 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.954562  509360 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1020 13:26:02.964022  509360 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1020 13:26:02.971746  509360 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1020 13:26:02.979320  509360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:26:03.103739  509360 ssh_runner.go:195] Run: sudo systemctl restart crio
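Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment before the restart (only the keys touched by the log are shown; section placement follows stock cri-o config layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]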
	I1020 13:26:03.263184  509360 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1020 13:26:03.263254  509360 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1020 13:26:03.267219  509360 start.go:563] Will wait 60s for crictl version
	I1020 13:26:03.267283  509360 ssh_runner.go:195] Run: which crictl
	I1020 13:26:03.270760  509360 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1020 13:26:03.303826  509360 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
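Note: crictl resolves the socket through the /etc/crictl.yaml written a few lines earlier; the endpoint can also be passed explicitly to reproduce the version probe:

    sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version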
	I1020 13:26:03.303909  509360 ssh_runner.go:195] Run: crio --version
	I1020 13:26:03.342303  509360 ssh_runner.go:195] Run: crio --version
	I1020 13:26:03.380191  509360 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1020 13:26:01.614184  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:02.113591  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:02.613628  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:03.112981  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:03.613585  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:04.112871  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:04.613448  506566 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1020 13:26:04.817250  506566 kubeadm.go:1113] duration metric: took 3.941536772s to wait for elevateKubeSystemPrivileges
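Note: the repeated "get sa default" runs above are a poll loop; the post-init step waits until the default ServiceAccount exists before binding kube-system privileges. A rough equivalent (interval inferred from the ~500ms spacing of the timestamps):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
            --kubeconfig=/var/lib/minikube/kubeconfig \
            get sa default >/dev/null 2>&1; do
        sleep 0.5
    done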
	I1020 13:26:04.817279  506566 kubeadm.go:402] duration metric: took 23.051102722s to StartCluster
	I1020 13:26:04.817296  506566 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:04.817353  506566 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:26:04.817998  506566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:04.818211  506566 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:26:04.818369  506566 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1020 13:26:04.818629  506566 config.go:182] Loaded profile config "auto-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:26:04.818665  506566 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:26:04.818732  506566 addons.go:69] Setting storage-provisioner=true in profile "auto-308474"
	I1020 13:26:04.818747  506566 addons.go:238] Setting addon storage-provisioner=true in "auto-308474"
	I1020 13:26:04.818770  506566 host.go:66] Checking if "auto-308474" exists ...
	I1020 13:26:04.819306  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:26:04.819800  506566 addons.go:69] Setting default-storageclass=true in profile "auto-308474"
	I1020 13:26:04.819825  506566 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-308474"
	I1020 13:26:04.820130  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:26:04.825315  506566 out.go:179] * Verifying Kubernetes components...
	I1020 13:26:04.828585  506566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:26:04.864779  506566 addons.go:238] Setting addon default-storageclass=true in "auto-308474"
	I1020 13:26:04.864821  506566 host.go:66] Checking if "auto-308474" exists ...
	I1020 13:26:04.865239  506566 cli_runner.go:164] Run: docker container inspect auto-308474 --format={{.State.Status}}
	I1020 13:26:04.874373  506566 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:26:03.382951  509360 cli_runner.go:164] Run: docker network inspect no-preload-744804 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1020 13:26:03.402466  509360 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1020 13:26:03.407469  509360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:26:03.418070  509360 kubeadm.go:883] updating cluster {Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1020 13:26:03.418179  509360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 13:26:03.418229  509360 ssh_runner.go:195] Run: sudo crictl images --output json
	I1020 13:26:03.455769  509360 crio.go:514] all images are preloaded for cri-o runtime.
	I1020 13:26:03.455797  509360 cache_images.go:85] Images are preloaded, skipping loading
	I1020 13:26:03.455806  509360 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1020 13:26:03.455900  509360 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-744804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
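Note: the empty ExecStart= in the unit above is the standard systemd idiom for replacing, rather than appending to, an inherited command. The effective unit, including the 10-kubeadm.conf drop-in scp'd further down, can be inspected on the node with:

    sudo systemctl cat kubelet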
	I1020 13:26:03.455991  509360 ssh_runner.go:195] Run: crio config
	I1020 13:26:03.525952  509360 cni.go:84] Creating CNI manager for ""
	I1020 13:26:03.525982  509360 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 13:26:03.526039  509360 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1020 13:26:03.526081  509360 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-744804 NodeName:no-preload-744804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1020 13:26:03.526245  509360 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-744804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
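Note: a config of this shape can be sanity-checked before it is used; kubeadm ships a validator for it (file path matches the kubeadm.yaml.new scp'd a few lines below):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new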
	
	I1020 13:26:03.526335  509360 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1020 13:26:03.534664  509360 binaries.go:44] Found k8s binaries, skipping transfer
	I1020 13:26:03.534785  509360 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1020 13:26:03.542924  509360 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1020 13:26:03.562521  509360 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1020 13:26:03.576879  509360 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1020 13:26:03.590357  509360 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1020 13:26:03.594011  509360 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1020 13:26:03.605663  509360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:26:03.756200  509360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:26:03.774171  509360 certs.go:69] Setting up /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804 for IP: 192.168.76.2
	I1020 13:26:03.774208  509360 certs.go:195] generating shared ca certs ...
	I1020 13:26:03.774253  509360 certs.go:227] acquiring lock for ca certs: {Name:mke3b076a0160d1a893e4d179c92e104fac92954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:03.774425  509360 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key
	I1020 13:26:03.774497  509360 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key
	I1020 13:26:03.774513  509360 certs.go:257] generating profile certs ...
	I1020 13:26:03.774617  509360 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.key
	I1020 13:26:03.774718  509360 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key.c014680a
	I1020 13:26:03.774839  509360 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key
	I1020 13:26:03.774996  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem (1338 bytes)
	W1020 13:26:03.775053  509360 certs.go:480] ignoring /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259_empty.pem, impossibly tiny 0 bytes
	I1020 13:26:03.775065  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca-key.pem (1675 bytes)
	I1020 13:26:03.775091  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/ca.pem (1078 bytes)
	I1020 13:26:03.775135  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/cert.pem (1123 bytes)
	I1020 13:26:03.775166  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/certs/key.pem (1679 bytes)
	I1020 13:26:03.775236  509360 certs.go:484] found cert: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem (1708 bytes)
	I1020 13:26:03.777376  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1020 13:26:03.798655  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1020 13:26:03.823043  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1020 13:26:03.844434  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1020 13:26:03.874933  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1020 13:26:03.898335  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1020 13:26:03.922563  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1020 13:26:03.959069  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1020 13:26:04.025796  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1020 13:26:04.057437  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/certs/298259.pem --> /usr/share/ca-certificates/298259.pem (1338 bytes)
	I1020 13:26:04.085816  509360 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/ssl/certs/2982592.pem --> /usr/share/ca-certificates/2982592.pem (1708 bytes)
	I1020 13:26:04.122873  509360 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1020 13:26:04.137997  509360 ssh_runner.go:195] Run: openssl version
	I1020 13:26:04.145226  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298259.pem && ln -fs /usr/share/ca-certificates/298259.pem /etc/ssl/certs/298259.pem"
	I1020 13:26:04.154671  509360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298259.pem
	I1020 13:26:04.162534  509360 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 20 12:24 /usr/share/ca-certificates/298259.pem
	I1020 13:26:04.162652  509360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298259.pem
	I1020 13:26:04.209332  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298259.pem /etc/ssl/certs/51391683.0"
	I1020 13:26:04.217691  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982592.pem && ln -fs /usr/share/ca-certificates/2982592.pem /etc/ssl/certs/2982592.pem"
	I1020 13:26:04.226417  509360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982592.pem
	I1020 13:26:04.232517  509360 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 20 12:24 /usr/share/ca-certificates/2982592.pem
	I1020 13:26:04.232635  509360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982592.pem
	I1020 13:26:04.275187  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982592.pem /etc/ssl/certs/3ec20f2e.0"
	I1020 13:26:04.283347  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1020 13:26:04.294010  509360 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:26:04.298587  509360 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 20 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:26:04.298707  509360 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1020 13:26:04.341912  509360 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
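
	The sequence above installs each CA under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL's trust lookup locates it. A minimal shell sketch of that step, not minikube's Go implementation, using the minikubeCA.pem path from this run:

	    # Compute the OpenSSL subject hash for the certificate (prints e.g. b5213941).
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # Link it into /etc/ssl/certs as <hash>.0 so OpenSSL trust lookup finds it,
	    # keeping an existing symlink if one is already in place.
	    sudo test -L "/etc/ssl/certs/${hash}.0" || \
	      sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
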
	I1020 13:26:04.350282  509360 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1020 13:26:04.354968  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1020 13:26:04.417750  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1020 13:26:04.511265  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1020 13:26:04.624827  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1020 13:26:04.844748  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1020 13:26:05.007389  509360 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
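
	Each of the -checkend 86400 probes above asks OpenSSL whether a certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid past that window, non-zero would make minikube regenerate it. A hedged sketch of the same check over two of the paths from this run:

	    # openssl exits 0 if the cert will NOT expire within 86400s (24h), 1 if it will.
	    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	               /var/lib/minikube/certs/etcd/server.crt; do
	      if openssl x509 -noout -in "$crt" -checkend 86400; then
	        echo "$crt: valid for more than 24h"
	      else
	        echo "$crt: expires within 24h"
	      fi
	    done
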
	I1020 13:26:05.169071  509360 kubeadm.go:400] StartCluster: {Name:no-preload-744804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-744804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 13:26:05.169174  509360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1020 13:26:05.169246  509360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1020 13:26:05.241879  509360 cri.go:89] found id: "8f15a98da5f338160fc0802f3aac18ef56c3a8ac8e7f0d8a95b82a15d0cbfba5"
	I1020 13:26:05.241904  509360 cri.go:89] found id: "6e11ee6379c8057195df4b7174497050554e2746585cffbcff5d6ee674caccd2"
	I1020 13:26:05.241910  509360 cri.go:89] found id: "1c3907b84b2719c834370b3a234bfcf74dccb4f164f5f6e62b92590abdba5b57"
	I1020 13:26:05.241914  509360 cri.go:89] found id: "4f36e401d485e4f4d90833026e33ea3530d32bdd15cccc9487bf620da50270af"
	I1020 13:26:05.241917  509360 cri.go:89] found id: ""
	I1020 13:26:05.241969  509360 ssh_runner.go:195] Run: sudo runc list -f json
	W1020 13:26:05.266139  509360 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T13:26:05Z" level=error msg="open /run/runc: no such file or directory"
	I1020 13:26:05.266235  509360 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1020 13:26:05.282875  509360 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1020 13:26:05.282898  509360 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1020 13:26:05.282971  509360 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1020 13:26:05.304820  509360 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1020 13:26:05.305438  509360 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-744804" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:26:05.305690  509360 kubeconfig.go:62] /home/jenkins/minikube-integration/21773-296391/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-744804" cluster setting kubeconfig missing "no-preload-744804" context setting]
	I1020 13:26:05.306182  509360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:05.307579  509360 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1020 13:26:05.333651  509360 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1020 13:26:05.333687  509360 kubeadm.go:601] duration metric: took 50.78306ms to restartPrimaryControlPlane
	I1020 13:26:05.333697  509360 kubeadm.go:402] duration metric: took 164.637418ms to StartCluster
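
	The kubeconfig repair above fires because the file carries no entry for this cluster; the presence check itself is easy to reproduce. A sketch using the kubeconfig path and context name from this run:

	    # Does the kubeconfig already carry a context for this cluster?
	    kubectl config get-contexts -o name \
	      --kubeconfig /home/jenkins/minikube-integration/21773-296391/kubeconfig \
	      | grep -qx no-preload-744804 \
	      && echo "context present" \
	      || echo "context missing; minikube rewrites the kubeconfig"
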
	I1020 13:26:05.333720  509360 settings.go:142] acquiring lock: {Name:mkcc78ffad256e56eca2b4c758506780a80dded8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:05.333801  509360 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:26:05.334811  509360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/kubeconfig: {Name:mkbeb1a856315ede8573a4c86cfe32fa8822a6ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 13:26:05.335052  509360 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1020 13:26:05.335452  509360 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1020 13:26:05.335576  509360 addons.go:69] Setting storage-provisioner=true in profile "no-preload-744804"
	I1020 13:26:05.335592  509360 addons.go:238] Setting addon storage-provisioner=true in "no-preload-744804"
	W1020 13:26:05.335598  509360 addons.go:247] addon storage-provisioner should already be in state true
	I1020 13:26:05.335622  509360 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:26:05.335637  509360 addons.go:69] Setting dashboard=true in profile "no-preload-744804"
	I1020 13:26:05.335654  509360 addons.go:238] Setting addon dashboard=true in "no-preload-744804"
	W1020 13:26:05.335660  509360 addons.go:247] addon dashboard should already be in state true
	I1020 13:26:05.335681  509360 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:26:05.336082  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:05.336208  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:05.336574  509360 addons.go:69] Setting default-storageclass=true in profile "no-preload-744804"
	I1020 13:26:05.336600  509360 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-744804"
	I1020 13:26:05.336879  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:05.335508  509360 config.go:182] Loaded profile config "no-preload-744804": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:26:05.341132  509360 out.go:179] * Verifying Kubernetes components...
	I1020 13:26:05.344814  509360 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1020 13:26:05.394881  509360 addons.go:238] Setting addon default-storageclass=true in "no-preload-744804"
	W1020 13:26:05.394899  509360 addons.go:247] addon default-storageclass should already be in state true
	I1020 13:26:05.394923  509360 host.go:66] Checking if "no-preload-744804" exists ...
	I1020 13:26:05.395335  509360 cli_runner.go:164] Run: docker container inspect no-preload-744804 --format={{.State.Status}}
	I1020 13:26:05.397623  509360 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1020 13:26:05.400619  509360 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1020 13:26:05.403544  509360 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:26:05.403566  509360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:26:05.403629  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:05.407006  509360 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1020 13:26:05.412451  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1020 13:26:05.412482  509360 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1020 13:26:05.412582  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:05.448637  509360 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:26:05.448657  509360 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:26:05.448719  509360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-744804
	I1020 13:26:05.451939  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:05.477250  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
	I1020 13:26:05.488686  509360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa Username:docker}
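
	The Go template handed to docker container inspect resolves the host port Docker mapped onto the container's 22/tcp, which is where the three new ssh clients on 127.0.0.1:33468 come from. For illustration, the same lookup and an equivalent manual connection from a shell, with the key path and port taken from this run:

	    # Print the host port mapped to the node container's SSH port.
	    docker container inspect \
	      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	      no-preload-744804
	    # Equivalent manual SSH session into the node container.
	    ssh -p 33468 \
	      -i /home/jenkins/minikube-integration/21773-296391/.minikube/machines/no-preload-744804/id_rsa \
	      docker@127.0.0.1
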
	I1020 13:26:05.858631  509360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:26:05.957772  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1020 13:26:05.957839  509360 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1020 13:26:05.988006  509360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:26:06.000801  509360 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:26:04.879439  506566 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:26:04.879480  506566 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1020 13:26:04.879547  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:26:04.910856  506566 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1020 13:26:04.910876  506566 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1020 13:26:04.910941  506566 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-308474
	I1020 13:26:04.925762  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:26:04.942261  506566 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/auto-308474/id_rsa Username:docker}
	I1020 13:26:05.420684  506566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1020 13:26:05.509176  506566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1020 13:26:05.615683  506566 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1020 13:26:05.793377  506566 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1020 13:26:05.794300  506566 node_ready.go:35] waiting up to 15m0s for node "auto-308474" to be "Ready" ...
	I1020 13:26:07.267783  506566 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.652060265s)
	I1020 13:26:07.268010  506566 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.474598979s)
	I1020 13:26:07.268032  506566 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1020 13:26:07.270928  506566 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
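
	The 1.47s pipeline completed above rewrites CoreDNS's Corefile in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block (resolving host.minikube.internal to the host gateway, 192.168.85.1 in this run) in front of the forward directive, and feeds the result back through kubectl replace. A condensed sketch of the same edit, reusing the sed expression from the log and assuming kubectl is on PATH:

	    # Inject the host.minikube.internal record into CoreDNS and apply it back.
	    kubectl -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
	      | kubectl -n kube-system replace -f -
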
	I1020 13:26:06.122888  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1020 13:26:06.122962  509360 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1020 13:26:06.235466  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1020 13:26:06.235542  509360 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1020 13:26:06.320072  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1020 13:26:06.320143  509360 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1020 13:26:06.361287  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1020 13:26:06.361359  509360 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1020 13:26:06.390727  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1020 13:26:06.390798  509360 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1020 13:26:06.422072  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1020 13:26:06.422146  509360 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1020 13:26:06.507162  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1020 13:26:06.507233  509360 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1020 13:26:06.545636  509360 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1020 13:26:06.545705  509360 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1020 13:26:06.571037  509360 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
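
	All ten dashboard manifests staged under /etc/kubernetes/addons are applied in a single kubectl invocation; kubectl accepts repeated -f flags (or a whole directory) and applies the files in the order given. A sketch of the shape of that call, trimmed to two of the staged files:

	    # One apply for many staged manifests; a directory argument also works.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	      -f /etc/kubernetes/addons/dashboard-ns.yaml \
	      -f /etc/kubernetes/addons/dashboard-svc.yaml
	    # or, equivalently, for everything staged in the directory:
	    #   kubectl apply -f /etc/kubernetes/addons/
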
	I1020 13:26:07.273997  506566 addons.go:514] duration metric: took 2.455311566s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1020 13:26:07.772513  506566 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-308474" context rescaled to 1 replicas
	W1020 13:26:07.797620  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:09.798106  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	I1020 13:26:13.082994  509360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.09490748s)
	I1020 13:26:13.083080  509360 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.082184471s)
	I1020 13:26:13.083311  509360 node_ready.go:35] waiting up to 6m0s for node "no-preload-744804" to be "Ready" ...
	I1020 13:26:13.083188  509360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.512070824s)
	I1020 13:26:13.084567  509360 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.225855038s)
	I1020 13:26:13.087629  509360 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-744804 addons enable metrics-server
	
	I1020 13:26:13.125283  509360 node_ready.go:49] node "no-preload-744804" is "Ready"
	I1020 13:26:13.125367  509360 node_ready.go:38] duration metric: took 42.042571ms for node "no-preload-744804" to be "Ready" ...
	I1020 13:26:13.125395  509360 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:26:13.125490  509360 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:26:13.162119  509360 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1020 13:26:13.165046  509360 addons.go:514] duration metric: took 7.8295782s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1020 13:26:13.174043  509360 api_server.go:72] duration metric: took 7.838934116s to wait for apiserver process to appear ...
	I1020 13:26:13.174131  509360 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:26:13.174174  509360 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:26:13.184515  509360 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1020 13:26:13.184599  509360 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1020 13:26:13.675278  509360 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1020 13:26:13.684406  509360 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1020 13:26:13.685629  509360 api_server.go:141] control plane version: v1.34.1
	I1020 13:26:13.685655  509360 api_server.go:131] duration metric: took 511.504558ms to wait for apiserver health ...
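
	The 500 above is expected during bootstrap: a single post-start hook (rbac/bootstrap-roles) has not finished, every other check is already [+] ok, and half a second later the same endpoint returns 200. The probe is a plain HTTPS GET that tolerates the apiserver's self-signed serving chain; a sketch with curl, endpoint taken from this run:

	    # Poll /healthz until the apiserver reports ok; -k skips TLS verification,
	    # -f makes curl exit non-zero on the HTTP 500 so the loop keeps going.
	    until curl -ksf https://192.168.76.2:8443/healthz >/dev/null; do
	      sleep 0.5
	    done
	    echo "apiserver healthy"
	    # Per-check breakdown like the log's, on demand:
	    curl -ks "https://192.168.76.2:8443/healthz?verbose"
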
	I1020 13:26:13.685665  509360 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:26:13.689426  509360 system_pods.go:59] 8 kube-system pods found
	I1020 13:26:13.689466  509360 system_pods.go:61] "coredns-66bc5c9577-czxmg" [dfe5480f-3c87-4f50-8890-9aeb8740860b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:13.689476  509360 system_pods.go:61] "etcd-no-preload-744804" [861cd06e-ae97-40a2-94f3-c36f118ae148] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:26:13.689520  509360 system_pods.go:61] "kindnet-tqpf7" [d65258f0-f2a5-4c71-910b-d148291111ae] Running
	I1020 13:26:13.689528  509360 system_pods.go:61] "kube-apiserver-no-preload-744804" [5045b24e-f1ef-4e65-938c-3999ea03c565] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:26:13.689535  509360 system_pods.go:61] "kube-controller-manager-no-preload-744804" [f842efbf-e39d-4c96-b2d2-14918e2a33a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:26:13.689545  509360 system_pods.go:61] "kube-proxy-bv8x8" [835b8b0c-6e21-43be-9656-1e09387eab43] Running
	I1020 13:26:13.689552  509360 system_pods.go:61] "kube-scheduler-no-preload-744804" [469f86bf-dc90-42fe-9d33-901b8c97aabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:26:13.689556  509360 system_pods.go:61] "storage-provisioner" [31880320-20a8-4cbe-b5c2-4b1a321c8501] Running
	I1020 13:26:13.689579  509360 system_pods.go:74] duration metric: took 3.907843ms to wait for pod list to return data ...
	I1020 13:26:13.689594  509360 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:26:13.692142  509360 default_sa.go:45] found service account: "default"
	I1020 13:26:13.692167  509360 default_sa.go:55] duration metric: took 2.565966ms for default service account to be created ...
	I1020 13:26:13.692177  509360 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:26:13.695010  509360 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:13.695044  509360 system_pods.go:89] "coredns-66bc5c9577-czxmg" [dfe5480f-3c87-4f50-8890-9aeb8740860b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:13.695054  509360 system_pods.go:89] "etcd-no-preload-744804" [861cd06e-ae97-40a2-94f3-c36f118ae148] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1020 13:26:13.695085  509360 system_pods.go:89] "kindnet-tqpf7" [d65258f0-f2a5-4c71-910b-d148291111ae] Running
	I1020 13:26:13.695093  509360 system_pods.go:89] "kube-apiserver-no-preload-744804" [5045b24e-f1ef-4e65-938c-3999ea03c565] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1020 13:26:13.695107  509360 system_pods.go:89] "kube-controller-manager-no-preload-744804" [f842efbf-e39d-4c96-b2d2-14918e2a33a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1020 13:26:13.695113  509360 system_pods.go:89] "kube-proxy-bv8x8" [835b8b0c-6e21-43be-9656-1e09387eab43] Running
	I1020 13:26:13.695123  509360 system_pods.go:89] "kube-scheduler-no-preload-744804" [469f86bf-dc90-42fe-9d33-901b8c97aabc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1020 13:26:13.695150  509360 system_pods.go:89] "storage-provisioner" [31880320-20a8-4cbe-b5c2-4b1a321c8501] Running
	I1020 13:26:13.695158  509360 system_pods.go:126] duration metric: took 2.975473ms to wait for k8s-apps to be running ...
	I1020 13:26:13.695166  509360 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:26:13.695222  509360 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:26:13.712257  509360 system_svc.go:56] duration metric: took 17.082049ms WaitForService to wait for kubelet
	I1020 13:26:13.712326  509360 kubeadm.go:586] duration metric: took 8.377232821s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:26:13.712470  509360 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:26:13.715579  509360 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:26:13.715611  509360 node_conditions.go:123] node cpu capacity is 2
	I1020 13:26:13.715633  509360 node_conditions.go:105] duration metric: took 3.149548ms to run NodePressure ...
	I1020 13:26:13.715660  509360 start.go:241] waiting for startup goroutines ...
	I1020 13:26:13.715676  509360 start.go:246] waiting for cluster config update ...
	I1020 13:26:13.715687  509360 start.go:255] writing updated cluster config ...
	I1020 13:26:13.715995  509360 ssh_runner.go:195] Run: rm -f paused
	I1020 13:26:13.720268  509360 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:26:13.723896  509360 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czxmg" in "kube-system" namespace to be "Ready" or be gone ...
	W1020 13:26:15.729519  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:11.798537  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:14.297265  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:17.730066  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:19.730855  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:16.797451  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:18.797547  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:21.298299  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:22.230564  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:24.732349  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:23.797534  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:25.797657  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:27.229557  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:29.230355  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:27.797921  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:30.298267  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:31.729737  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:34.230242  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:32.797268  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:34.797341  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:36.729257  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:38.729839  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:36.801923  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:39.297153  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:41.297259  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:41.229827  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	W1020 13:26:43.233887  509360 pod_ready.go:104] pod "coredns-66bc5c9577-czxmg" is not "Ready", error: <nil>
	I1020 13:26:44.730007  509360 pod_ready.go:94] pod "coredns-66bc5c9577-czxmg" is "Ready"
	I1020 13:26:44.730034  509360 pod_ready.go:86] duration metric: took 31.006109862s for pod "coredns-66bc5c9577-czxmg" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.733072  509360 pod_ready.go:83] waiting for pod "etcd-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.738152  509360 pod_ready.go:94] pod "etcd-no-preload-744804" is "Ready"
	I1020 13:26:44.738233  509360 pod_ready.go:86] duration metric: took 5.134781ms for pod "etcd-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.741443  509360 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.746704  509360 pod_ready.go:94] pod "kube-apiserver-no-preload-744804" is "Ready"
	I1020 13:26:44.746735  509360 pod_ready.go:86] duration metric: took 5.25909ms for pod "kube-apiserver-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.749173  509360 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:44.927983  509360 pod_ready.go:94] pod "kube-controller-manager-no-preload-744804" is "Ready"
	I1020 13:26:44.928012  509360 pod_ready.go:86] duration metric: took 178.813474ms for pod "kube-controller-manager-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:45.131271  509360 pod_ready.go:83] waiting for pod "kube-proxy-bv8x8" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:45.527950  509360 pod_ready.go:94] pod "kube-proxy-bv8x8" is "Ready"
	I1020 13:26:45.527980  509360 pod_ready.go:86] duration metric: took 396.676644ms for pod "kube-proxy-bv8x8" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:45.728068  509360 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:46.128382  509360 pod_ready.go:94] pod "kube-scheduler-no-preload-744804" is "Ready"
	I1020 13:26:46.128451  509360 pod_ready.go:86] duration metric: took 400.353993ms for pod "kube-scheduler-no-preload-744804" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:46.128471  509360 pod_ready.go:40] duration metric: took 32.408134608s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:26:46.180351  509360 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:26:46.183326  509360 out.go:179] * Done! kubectl is now configured to use "no-preload-744804" cluster and "default" namespace by default
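
	The pod_ready loop above polled coredns for about 31s until its Ready condition flipped, then ticked off the remaining control-plane pods in milliseconds. The same wait can be expressed declaratively with kubectl; a sketch, not minikube's actual mechanism, using the label selectors named in its wait list:

	    # Block until the selected kube-system pods report Ready (4m cap,
	    # mirroring minikube's extra wait). Selectors taken from the log above.
	    kubectl -n kube-system wait pod \
	      -l 'k8s-app in (kube-dns, kube-proxy)' \
	      --for=condition=Ready --timeout=4m
	    kubectl -n kube-system wait pod \
	      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' \
	      --for=condition=Ready --timeout=4m
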
	W1020 13:26:43.297661  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	W1020 13:26:45.298471  506566 node_ready.go:57] node "auto-308474" has "Ready":"False" status (will retry)
	I1020 13:26:47.802167  506566 node_ready.go:49] node "auto-308474" is "Ready"
	I1020 13:26:47.802192  506566 node_ready.go:38] duration metric: took 42.007858391s for node "auto-308474" to be "Ready" ...
	I1020 13:26:47.802204  506566 api_server.go:52] waiting for apiserver process to appear ...
	I1020 13:26:47.802280  506566 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:26:47.822891  506566 api_server.go:72] duration metric: took 43.004648108s to wait for apiserver process to appear ...
	I1020 13:26:47.822914  506566 api_server.go:88] waiting for apiserver healthz status ...
	I1020 13:26:47.822935  506566 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1020 13:26:47.837008  506566 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1020 13:26:47.839393  506566 api_server.go:141] control plane version: v1.34.1
	I1020 13:26:47.839428  506566 api_server.go:131] duration metric: took 16.506805ms to wait for apiserver health ...
	I1020 13:26:47.839438  506566 system_pods.go:43] waiting for kube-system pods to appear ...
	I1020 13:26:47.843525  506566 system_pods.go:59] 8 kube-system pods found
	I1020 13:26:47.843560  506566 system_pods.go:61] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:47.843566  506566 system_pods.go:61] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:47.843572  506566 system_pods.go:61] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:47.843577  506566 system_pods.go:61] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:47.843581  506566 system_pods.go:61] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:47.843585  506566 system_pods.go:61] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:47.843592  506566 system_pods.go:61] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:47.843598  506566 system_pods.go:61] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:26:47.843605  506566 system_pods.go:74] duration metric: took 4.160203ms to wait for pod list to return data ...
	I1020 13:26:47.843613  506566 default_sa.go:34] waiting for default service account to be created ...
	I1020 13:26:47.849932  506566 default_sa.go:45] found service account: "default"
	I1020 13:26:47.850006  506566 default_sa.go:55] duration metric: took 6.386021ms for default service account to be created ...
	I1020 13:26:47.850031  506566 system_pods.go:116] waiting for k8s-apps to be running ...
	I1020 13:26:47.855227  506566 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:47.855313  506566 system_pods.go:89] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:47.855353  506566 system_pods.go:89] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:47.855393  506566 system_pods.go:89] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:47.855427  506566 system_pods.go:89] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:47.855450  506566 system_pods.go:89] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:47.855481  506566 system_pods.go:89] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:47.855504  506566 system_pods.go:89] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:47.855528  506566 system_pods.go:89] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:26:47.855580  506566 retry.go:31] will retry after 268.502746ms: missing components: kube-dns
	I1020 13:26:48.129068  506566 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:48.129106  506566 system_pods.go:89] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:48.129114  506566 system_pods.go:89] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:48.129119  506566 system_pods.go:89] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:48.129146  506566 system_pods.go:89] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:48.129155  506566 system_pods.go:89] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:48.129160  506566 system_pods.go:89] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:48.129164  506566 system_pods.go:89] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:48.129172  506566 system_pods.go:89] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:26:48.129190  506566 retry.go:31] will retry after 257.512221ms: missing components: kube-dns
	I1020 13:26:48.391401  506566 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:48.391443  506566 system_pods.go:89] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1020 13:26:48.391450  506566 system_pods.go:89] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:48.391456  506566 system_pods.go:89] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:48.391461  506566 system_pods.go:89] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:48.391484  506566 system_pods.go:89] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:48.391496  506566 system_pods.go:89] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:48.391500  506566 system_pods.go:89] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:48.391506  506566 system_pods.go:89] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1020 13:26:48.391528  506566 retry.go:31] will retry after 380.497423ms: missing components: kube-dns
	I1020 13:26:48.776652  506566 system_pods.go:86] 8 kube-system pods found
	I1020 13:26:48.776684  506566 system_pods.go:89] "coredns-66bc5c9577-nnvj2" [053c25c0-ff11-4092-ad90-e57f089b7045] Running
	I1020 13:26:48.776692  506566 system_pods.go:89] "etcd-auto-308474" [c5f5fea3-739d-440c-86ee-b0783e4da3ca] Running
	I1020 13:26:48.776696  506566 system_pods.go:89] "kindnet-qxgmz" [06aacca5-9650-4b0e-ad1c-cf55b65c923b] Running
	I1020 13:26:48.776701  506566 system_pods.go:89] "kube-apiserver-auto-308474" [c5f4b4d0-07b2-4649-96f3-ad1e381d1962] Running
	I1020 13:26:48.776706  506566 system_pods.go:89] "kube-controller-manager-auto-308474" [535ace4c-da47-4eec-8cf3-fd55797f6ab8] Running
	I1020 13:26:48.776711  506566 system_pods.go:89] "kube-proxy-c6ssp" [59acc22b-915f-4797-bceb-2fd1ffdbba61] Running
	I1020 13:26:48.776716  506566 system_pods.go:89] "kube-scheduler-auto-308474" [affea3f5-facc-4356-99f4-777b41fef2a8] Running
	I1020 13:26:48.776720  506566 system_pods.go:89] "storage-provisioner" [dbf2a97a-0aaa-449a-9aac-d44b0d70d31d] Running
	I1020 13:26:48.776728  506566 system_pods.go:126] duration metric: took 926.678074ms to wait for k8s-apps to be running ...
	I1020 13:26:48.776740  506566 system_svc.go:44] waiting for kubelet service to be running ....
	I1020 13:26:48.776798  506566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:26:48.790900  506566 system_svc.go:56] duration metric: took 14.150934ms WaitForService to wait for kubelet
	I1020 13:26:48.790935  506566 kubeadm.go:586] duration metric: took 43.972700158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1020 13:26:48.790965  506566 node_conditions.go:102] verifying NodePressure condition ...
	I1020 13:26:48.793935  506566 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1020 13:26:48.794016  506566 node_conditions.go:123] node cpu capacity is 2
	I1020 13:26:48.794034  506566 node_conditions.go:105] duration metric: took 3.06354ms to run NodePressure ...
	I1020 13:26:48.794050  506566 start.go:241] waiting for startup goroutines ...
	I1020 13:26:48.794059  506566 start.go:246] waiting for cluster config update ...
	I1020 13:26:48.794070  506566 start.go:255] writing updated cluster config ...
	I1020 13:26:48.794397  506566 ssh_runner.go:195] Run: rm -f paused
	I1020 13:26:48.798191  506566 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:26:48.802262  506566 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnvj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.807422  506566 pod_ready.go:94] pod "coredns-66bc5c9577-nnvj2" is "Ready"
	I1020 13:26:48.807453  506566 pod_ready.go:86] duration metric: took 5.163097ms for pod "coredns-66bc5c9577-nnvj2" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.809897  506566 pod_ready.go:83] waiting for pod "etcd-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.815233  506566 pod_ready.go:94] pod "etcd-auto-308474" is "Ready"
	I1020 13:26:48.815260  506566 pod_ready.go:86] duration metric: took 5.334889ms for pod "etcd-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.818088  506566 pod_ready.go:83] waiting for pod "kube-apiserver-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.822744  506566 pod_ready.go:94] pod "kube-apiserver-auto-308474" is "Ready"
	I1020 13:26:48.822821  506566 pod_ready.go:86] duration metric: took 4.707247ms for pod "kube-apiserver-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:48.825516  506566 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:49.203094  506566 pod_ready.go:94] pod "kube-controller-manager-auto-308474" is "Ready"
	I1020 13:26:49.203122  506566 pod_ready.go:86] duration metric: took 377.529952ms for pod "kube-controller-manager-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:49.402321  506566 pod_ready.go:83] waiting for pod "kube-proxy-c6ssp" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:49.802405  506566 pod_ready.go:94] pod "kube-proxy-c6ssp" is "Ready"
	I1020 13:26:49.802431  506566 pod_ready.go:86] duration metric: took 400.084838ms for pod "kube-proxy-c6ssp" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:50.002903  506566 pod_ready.go:83] waiting for pod "kube-scheduler-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:50.402088  506566 pod_ready.go:94] pod "kube-scheduler-auto-308474" is "Ready"
	I1020 13:26:50.402117  506566 pod_ready.go:86] duration metric: took 399.137536ms for pod "kube-scheduler-auto-308474" in "kube-system" namespace to be "Ready" or be gone ...
	I1020 13:26:50.402131  506566 pod_ready.go:40] duration metric: took 1.603906597s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1020 13:26:50.455071  506566 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1020 13:26:50.458595  506566 out.go:179] * Done! kubectl is now configured to use "auto-308474" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.233004709Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=714f57d1-e8c6-45f1-9a31-6d0ca345b927 name=/runtime.v1.ImageService/ImageStatus
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.234585932Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7ab6689f-58fb-48d8-ba3c-1e14fb0dd391 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.234675057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.239694259Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.240012392Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a672f1408ea659b07e63c58a052b894dbbec7aaf62c96876e5d78bd8ff353224/merged/etc/passwd: no such file or directory"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.240113062Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a672f1408ea659b07e63c58a052b894dbbec7aaf62c96876e5d78bd8ff353224/merged/etc/group: no such file or directory"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.240459348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.273778562Z" level=info msg="Created container 11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b: kube-system/storage-provisioner/storage-provisioner" id=7ab6689f-58fb-48d8-ba3c-1e14fb0dd391 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.275491915Z" level=info msg="Starting container: 11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b" id=c3611939-1841-49ef-a1f8-3cc9d4c1d3b0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 20 13:26:43 no-preload-744804 crio[649]: time="2025-10-20T13:26:43.277423198Z" level=info msg="Started container" PID=1636 containerID=11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b description=kube-system/storage-provisioner/storage-provisioner id=c3611939-1841-49ef-a1f8-3cc9d4c1d3b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2e4fcf81f2f292c68bb8a43456cb5ad09f0f67a832f2db89950b78e98a6fa80
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.924722336Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.928904168Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.929086341Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.929503183Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.933049019Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.933175273Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.933256374Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.937866987Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.937894491Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.937913109Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.943933489Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.943965547Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.94398533Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.947686064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 20 13:26:52 no-preload-744804 crio[649]: time="2025-10-20T13:26:52.947719812Z" level=info msg="Updated default CNI network name to kindnet"
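
	CRI-O watches /etc/cni/net.d and re-runs network discovery on every filesystem event, which is why kindnet's write-then-rename of 10-kindnet.conflist.temp produces the CREATE/WRITE/RENAME sequence above, each followed by "Updated default CNI network name to kindnet". The installed config can be inspected directly on the node; a sketch, assuming jq is available there:

	    # Which CNI configs would CRI-O discover, and what does the default declare?
	    ls -l /etc/cni/net.d/
	    jq '{name, cniVersion, plugins: [.plugins[].type]}' \
	      /etc/cni/net.d/10-kindnet.conflist
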
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	11cd790594580       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago       Running             storage-provisioner         2                   d2e4fcf81f2f2       storage-provisioner                          kube-system
	8d27bd77cd846       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago       Exited              dashboard-metrics-scraper   2                   e6ed4649b8632       dashboard-metrics-scraper-6ffb444bf9-fxdmg   kubernetes-dashboard
	cc04991aeb9ac       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   87b227dab67a6       kubernetes-dashboard-855c9754f9-4sq6t        kubernetes-dashboard
	886a327b32a4b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   52899b32a22a6       coredns-66bc5c9577-czxmg                     kube-system
	31f05cb96945b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   a08519cbd1aab       kindnet-tqpf7                                kube-system
	38c23aad4fa88       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   cb84d4b6dba6a       kube-proxy-bv8x8                             kube-system
	6a30d7df87c11       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   8c590a4ec7099       busybox                                      default
	9a3ab7492a03c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago       Exited              storage-provisioner         1                   d2e4fcf81f2f2       storage-provisioner                          kube-system
	8f15a98da5f33       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   84c09336907bf       kube-scheduler-no-preload-744804             kube-system
	6e11ee6379c80       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   13c3c6008f660       etcd-no-preload-744804                       kube-system
	1c3907b84b271       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1157b1f3c1a64       kube-apiserver-no-preload-744804             kube-system
	4f36e401d485e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9d641ae1c2e52       kube-controller-manager-no-preload-744804    kube-system
	
	
	==> coredns [886a327b32a4bf69e26cd65a10e8e6b11c1d668342dc5a21d9f727e71375f98b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57509 - 57690 "HINFO IN 3550779046318444203.7077144803148319311. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035534578s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-744804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-744804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=883187b91f6c4487786774166ddb1e5a14f03fb6
	                    minikube.k8s.io/name=no-preload-744804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_20T13_24_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 Oct 2025 13:24:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-744804
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 Oct 2025 13:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 Oct 2025 13:26:42 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 Oct 2025 13:26:42 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 Oct 2025 13:26:42 +0000   Mon, 20 Oct 2025 13:24:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 Oct 2025 13:26:42 +0000   Mon, 20 Oct 2025 13:25:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-744804
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e6ebf1aa-cf6a-460e-af7e-a66b26d17d7c
	  Boot ID:                    902facbd-ef25-4705-96c1-369daf844309
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-czxmg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m38s
	  kube-system                 etcd-no-preload-744804                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m43s
	  kube-system                 kindnet-tqpf7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m39s
	  kube-system                 kube-apiserver-no-preload-744804              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-controller-manager-no-preload-744804     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-proxy-bv8x8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-scheduler-no-preload-744804              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fxdmg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4sq6t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m36s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m55s (x8 over 2m55s)  kubelet          Node no-preload-744804 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m55s (x8 over 2m55s)  kubelet          Node no-preload-744804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m55s (x8 over 2m55s)  kubelet          Node no-preload-744804 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m44s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m43s                  kubelet          Node no-preload-744804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m43s                  kubelet          Node no-preload-744804 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m43s                  kubelet          Node no-preload-744804 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m40s                  node-controller  Node no-preload-744804 event: Registered Node no-preload-744804 in Controller
	  Normal   NodeReady                100s                   kubelet          Node no-preload-744804 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 62s)      kubelet          Node no-preload-744804 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 62s)      kubelet          Node no-preload-744804 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 62s)      kubelet          Node no-preload-744804 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node no-preload-744804 event: Registered Node no-preload-744804 in Controller
	
	
	==> dmesg <==
	[Oct20 13:04] overlayfs: idmapped layers are currently not supported
	[ +40.317814] overlayfs: idmapped layers are currently not supported
	[Oct20 13:05] overlayfs: idmapped layers are currently not supported
	[Oct20 13:06] overlayfs: idmapped layers are currently not supported
	[Oct20 13:07] overlayfs: idmapped layers are currently not supported
	[Oct20 13:09] overlayfs: idmapped layers are currently not supported
	[  +2.977130] overlayfs: idmapped layers are currently not supported
	[Oct20 13:11] overlayfs: idmapped layers are currently not supported
	[ +28.876186] overlayfs: idmapped layers are currently not supported
	[Oct20 13:14] overlayfs: idmapped layers are currently not supported
	[Oct20 13:15] overlayfs: idmapped layers are currently not supported
	[Oct20 13:16] overlayfs: idmapped layers are currently not supported
	[Oct20 13:17] overlayfs: idmapped layers are currently not supported
	[ +36.686848] overlayfs: idmapped layers are currently not supported
	[Oct20 13:19] overlayfs: idmapped layers are currently not supported
	[Oct20 13:20] overlayfs: idmapped layers are currently not supported
	[Oct20 13:21] overlayfs: idmapped layers are currently not supported
	[Oct20 13:22] overlayfs: idmapped layers are currently not supported
	[Oct20 13:23] overlayfs: idmapped layers are currently not supported
	[ +43.225983] overlayfs: idmapped layers are currently not supported
	[Oct20 13:24] overlayfs: idmapped layers are currently not supported
	[Oct20 13:25] overlayfs: idmapped layers are currently not supported
	[ +42.548676] overlayfs: idmapped layers are currently not supported
	[Oct20 13:26] overlayfs: idmapped layers are currently not supported
	[Oct20 13:27] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [6e11ee6379c8057195df4b7174497050554e2746585cffbcff5d6ee674caccd2] <==
	{"level":"warn","ts":"2025-10-20T13:26:09.657465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.695008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.724884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.750529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.785651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.855265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.858674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.897038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.917230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.962244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:09.983057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.001641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.020478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.040872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.067584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.084866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.113434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.139732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.157222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.175626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.190865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.224908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.245132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.261455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-20T13:26:10.381901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32796","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:27:05 up  3:09,  0 user,  load average: 2.78, 2.94, 2.63
	Linux no-preload-744804 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [31f05cb96945b51652801d40a5cb2c12ac111770818e466dcbcbef7e5df312b3] <==
	I1020 13:26:12.730803       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1020 13:26:12.731253       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1020 13:26:12.731431       1 main.go:148] setting mtu 1500 for CNI 
	I1020 13:26:12.731444       1 main.go:178] kindnetd IP family: "ipv4"
	I1020 13:26:12.731458       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-20T13:26:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1020 13:26:12.919799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1020 13:26:12.925180       1 controller.go:381] "Waiting for informer caches to sync"
	I1020 13:26:12.925274       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1020 13:26:12.926308       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1020 13:26:42.919946       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1020 13:26:42.926550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1020 13:26:42.926550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1020 13:26:42.926645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1020 13:26:44.525971       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1020 13:26:44.526137       1 metrics.go:72] Registering metrics
	I1020 13:26:44.526212       1 controller.go:711] "Syncing nftables rules"
	I1020 13:26:52.924452       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:26:52.924501       1 main.go:301] handling current node
	I1020 13:27:02.928434       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1020 13:27:02.928469       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1c3907b84b2719c834370b3a234bfcf74dccb4f164f5f6e62b92590abdba5b57] <==
	I1020 13:26:11.399777       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1020 13:26:11.400206       1 aggregator.go:171] initial CRD sync complete...
	I1020 13:26:11.400227       1 autoregister_controller.go:144] Starting autoregister controller
	I1020 13:26:11.400236       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1020 13:26:11.400242       1 cache.go:39] Caches are synced for autoregister controller
	I1020 13:26:11.439875       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1020 13:26:11.459222       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1020 13:26:11.459893       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1020 13:26:11.467952       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1020 13:26:11.474878       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1020 13:26:11.474922       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1020 13:26:11.474938       1 policy_source.go:240] refreshing policies
	I1020 13:26:11.486817       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1020 13:26:11.516848       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1020 13:26:11.929992       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1020 13:26:12.067373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1020 13:26:12.103381       1 controller.go:667] quota admission added evaluator for: namespaces
	I1020 13:26:12.213420       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1020 13:26:12.357322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1020 13:26:12.420905       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1020 13:26:12.671000       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.1.0"}
	I1020 13:26:12.833773       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.154.136"}
	I1020 13:26:15.598382       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1020 13:26:15.844573       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1020 13:26:16.044741       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [4f36e401d485e4f4d90833026e33ea3530d32bdd15cccc9487bf620da50270af] <==
	I1020 13:26:15.596730       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:26:15.599253       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1020 13:26:15.604847       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1020 13:26:15.608245       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1020 13:26:15.610906       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1020 13:26:15.614153       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1020 13:26:15.618357       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1020 13:26:15.622676       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1020 13:26:15.628869       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:26:15.637596       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1020 13:26:15.637704       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1020 13:26:15.637783       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1020 13:26:15.637856       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1020 13:26:15.638982       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1020 13:26:15.639083       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1020 13:26:15.639178       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1020 13:26:15.639247       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-744804"
	I1020 13:26:15.639291       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1020 13:26:15.647759       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1020 13:26:15.648987       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1020 13:26:15.649008       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1020 13:26:15.649016       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1020 13:26:15.653767       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1020 13:26:15.660746       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1020 13:26:15.662331       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	
	
	==> kube-proxy [38c23aad4fa887459e239041be46dccc58a99edeb50d18acfa6a539f90c4f00e] <==
	I1020 13:26:13.146495       1 server_linux.go:53] "Using iptables proxy"
	I1020 13:26:13.254386       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1020 13:26:13.354982       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1020 13:26:13.355026       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1020 13:26:13.355118       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1020 13:26:13.375160       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1020 13:26:13.375285       1 server_linux.go:132] "Using iptables Proxier"
	I1020 13:26:13.379716       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1020 13:26:13.380149       1 server.go:527] "Version info" version="v1.34.1"
	I1020 13:26:13.380920       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:26:13.382276       1 config.go:200] "Starting service config controller"
	I1020 13:26:13.382348       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1020 13:26:13.382392       1 config.go:106] "Starting endpoint slice config controller"
	I1020 13:26:13.382440       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1020 13:26:13.382478       1 config.go:403] "Starting serviceCIDR config controller"
	I1020 13:26:13.382512       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1020 13:26:13.383153       1 config.go:309] "Starting node config controller"
	I1020 13:26:13.385951       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1020 13:26:13.386027       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1020 13:26:13.482728       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1020 13:26:13.482747       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1020 13:26:13.482765       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8f15a98da5f338160fc0802f3aac18ef56c3a8ac8e7f0d8a95b82a15d0cbfba5] <==
	I1020 13:26:09.529071       1 serving.go:386] Generated self-signed cert in-memory
	I1020 13:26:12.943125       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1020 13:26:12.953692       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1020 13:26:12.967840       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1020 13:26:12.968043       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1020 13:26:12.968103       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1020 13:26:12.968154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1020 13:26:12.989019       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:26:13.005224       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1020 13:26:13.005277       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:26:13.005285       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:26:13.068702       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1020 13:26:13.108481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1020 13:26:13.108663       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 20 13:26:11 no-preload-744804 kubelet[765]: I1020 13:26:11.972133     765 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 20 13:26:14 no-preload-744804 kubelet[765]: I1020 13:26:14.349785     765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: I1020 13:26:16.257406     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4sq6t\" (UID: \"7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4sq6t"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: I1020 13:26:16.257468     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4cc\" (UniqueName: \"kubernetes.io/projected/e346a326-7591-4c13-9ccb-72ebc2cfac5f-kube-api-access-5m4cc\") pod \"dashboard-metrics-scraper-6ffb444bf9-fxdmg\" (UID: \"e346a326-7591-4c13-9ccb-72ebc2cfac5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: I1020 13:26:16.257493     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e346a326-7591-4c13-9ccb-72ebc2cfac5f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-fxdmg\" (UID: \"e346a326-7591-4c13-9ccb-72ebc2cfac5f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: I1020 13:26:16.257519     765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzgk5\" (UniqueName: \"kubernetes.io/projected/7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4-kube-api-access-nzgk5\") pod \"kubernetes-dashboard-855c9754f9-4sq6t\" (UID: \"7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4sq6t"
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: W1020 13:26:16.483960     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/crio-e6ed4649b863223f775a5d9a56678a2e4ca5ebed105704e9ad61fff7216131d0 WatchSource:0}: Error finding container e6ed4649b863223f775a5d9a56678a2e4ca5ebed105704e9ad61fff7216131d0: Status 404 returned error can't find the container with id e6ed4649b863223f775a5d9a56678a2e4ca5ebed105704e9ad61fff7216131d0
	Oct 20 13:26:16 no-preload-744804 kubelet[765]: W1020 13:26:16.497321     765 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7c7d00bb470e02511727af4602ded56830c69d2eea0b132fe8bd13dd9f3c5a41/crio-87b227dab67a6f6cd3452be431dd9520c40faa66a8e8d3479b80f6a5264ea53c WatchSource:0}: Error finding container 87b227dab67a6f6cd3452be431dd9520c40faa66a8e8d3479b80f6a5264ea53c: Status 404 returned error can't find the container with id 87b227dab67a6f6cd3452be431dd9520c40faa66a8e8d3479b80f6a5264ea53c
	Oct 20 13:26:22 no-preload-744804 kubelet[765]: I1020 13:26:22.162538     765 scope.go:117] "RemoveContainer" containerID="e69010e6eec4c2a27f00170e8567ce8afc21a6f51862cc2741daf49aeefd2507"
	Oct 20 13:26:23 no-preload-744804 kubelet[765]: I1020 13:26:23.168409     765 scope.go:117] "RemoveContainer" containerID="e69010e6eec4c2a27f00170e8567ce8afc21a6f51862cc2741daf49aeefd2507"
	Oct 20 13:26:23 no-preload-744804 kubelet[765]: I1020 13:26:23.168736     765 scope.go:117] "RemoveContainer" containerID="f90384fe9f3b867df76db42baafab69de9e491a3ce27ed7cd5853f88eff595cb"
	Oct 20 13:26:23 no-preload-744804 kubelet[765]: E1020 13:26:23.168908     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fxdmg_kubernetes-dashboard(e346a326-7591-4c13-9ccb-72ebc2cfac5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg" podUID="e346a326-7591-4c13-9ccb-72ebc2cfac5f"
	Oct 20 13:26:26 no-preload-744804 kubelet[765]: I1020 13:26:26.449526     765 scope.go:117] "RemoveContainer" containerID="f90384fe9f3b867df76db42baafab69de9e491a3ce27ed7cd5853f88eff595cb"
	Oct 20 13:26:26 no-preload-744804 kubelet[765]: E1020 13:26:26.449715     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fxdmg_kubernetes-dashboard(e346a326-7591-4c13-9ccb-72ebc2cfac5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg" podUID="e346a326-7591-4c13-9ccb-72ebc2cfac5f"
	Oct 20 13:26:26 no-preload-744804 kubelet[765]: I1020 13:26:26.463328     765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4sq6t" podStartSLOduration=1.160297869 podStartE2EDuration="10.463306824s" podCreationTimestamp="2025-10-20 13:26:16 +0000 UTC" firstStartedPulling="2025-10-20 13:26:16.500749794 +0000 UTC m=+12.726717183" lastFinishedPulling="2025-10-20 13:26:25.803758733 +0000 UTC m=+22.029726138" observedRunningTime="2025-10-20 13:26:26.223859403 +0000 UTC m=+22.449826800" watchObservedRunningTime="2025-10-20 13:26:26.463306824 +0000 UTC m=+22.689274221"
	Oct 20 13:26:36 no-preload-744804 kubelet[765]: I1020 13:26:36.994111     765 scope.go:117] "RemoveContainer" containerID="f90384fe9f3b867df76db42baafab69de9e491a3ce27ed7cd5853f88eff595cb"
	Oct 20 13:26:37 no-preload-744804 kubelet[765]: I1020 13:26:37.212273     765 scope.go:117] "RemoveContainer" containerID="f90384fe9f3b867df76db42baafab69de9e491a3ce27ed7cd5853f88eff595cb"
	Oct 20 13:26:37 no-preload-744804 kubelet[765]: I1020 13:26:37.212585     765 scope.go:117] "RemoveContainer" containerID="8d27bd77cd846ddebfc1d2a5c08dc83af7d745f6da25670e469bb37556ffcac2"
	Oct 20 13:26:37 no-preload-744804 kubelet[765]: E1020 13:26:37.212750     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fxdmg_kubernetes-dashboard(e346a326-7591-4c13-9ccb-72ebc2cfac5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg" podUID="e346a326-7591-4c13-9ccb-72ebc2cfac5f"
	Oct 20 13:26:43 no-preload-744804 kubelet[765]: I1020 13:26:43.229927     765 scope.go:117] "RemoveContainer" containerID="9a3ab7492a03c361b566afcb199e4bae1397925004be8ee7d219da31312fd02b"
	Oct 20 13:26:46 no-preload-744804 kubelet[765]: I1020 13:26:46.449084     765 scope.go:117] "RemoveContainer" containerID="8d27bd77cd846ddebfc1d2a5c08dc83af7d745f6da25670e469bb37556ffcac2"
	Oct 20 13:26:46 no-preload-744804 kubelet[765]: E1020 13:26:46.450633     765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fxdmg_kubernetes-dashboard(e346a326-7591-4c13-9ccb-72ebc2cfac5f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fxdmg" podUID="e346a326-7591-4c13-9ccb-72ebc2cfac5f"
	Oct 20 13:26:58 no-preload-744804 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 20 13:26:58 no-preload-744804 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 20 13:26:58 no-preload-744804 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [cc04991aeb9acd4eb11bb78237f8d40eb0cbc8fd30f87da44618819c62b1650e] <==
	2025/10/20 13:26:25 Starting overwatch
	2025/10/20 13:26:25 Using namespace: kubernetes-dashboard
	2025/10/20 13:26:25 Using in-cluster config to connect to apiserver
	2025/10/20 13:26:25 Using secret token for csrf signing
	2025/10/20 13:26:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/20 13:26:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/20 13:26:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/20 13:26:25 Generating JWE encryption key
	2025/10/20 13:26:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/20 13:26:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/20 13:26:26 Initializing JWE encryption key from synchronized object
	2025/10/20 13:26:26 Creating in-cluster Sidecar client
	2025/10/20 13:26:26 Serving insecurely on HTTP port: 9090
	2025/10/20 13:26:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/20 13:26:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [11cd79059458090494bce3edb8ace8fc98e8ee85dc4ffc30fc6a11f6013de07b] <==
	I1020 13:26:43.291871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1020 13:26:43.304034       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1020 13:26:43.304088       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1020 13:26:43.314252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:26:46.769553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:26:51.038639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:26:54.638415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:26:57.691734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:00.713961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:00.719436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:27:00.719588       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1020 13:27:00.719911       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"570f9e7e-e0b0-42c3-8be9-6674d6360b18", APIVersion:"v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-744804_71705cee-8782-461d-a060-ca10a8117718 became leader
	I1020 13:27:00.719938       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-744804_71705cee-8782-461d-a060-ca10a8117718!
	W1020 13:27:00.726466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:00.742675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1020 13:27:00.820621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-744804_71705cee-8782-461d-a060-ca10a8117718!
	W1020 13:27:02.749729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:02.754820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:04.760650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1020 13:27:04.776739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [9a3ab7492a03c361b566afcb199e4bae1397925004be8ee7d219da31312fd02b] <==
	I1020 13:26:13.091393       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1020 13:26:43.108555       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-744804 -n no-preload-744804
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-744804 -n no-preload-744804: exit status 2 (359.39099ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-744804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.06s)


Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.64
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 6.09
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 172.02
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.79
48 TestAddons/StoppedEnableDisable 12.48
49 TestCertOptions 36.57
50 TestCertExpiration 343.93
52 TestForceSystemdFlag 40.49
53 TestForceSystemdEnv 39.53
59 TestErrorSpam/setup 34.63
60 TestErrorSpam/start 0.76
61 TestErrorSpam/status 1.1
62 TestErrorSpam/pause 6.35
63 TestErrorSpam/unpause 6.06
64 TestErrorSpam/stop 1.52
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 81.01
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 26.79
71 TestFunctional/serial/KubeContext 0.08
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.61
76 TestFunctional/serial/CacheCmd/cache/add_local 1.15
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 55.61
85 TestFunctional/serial/ComponentHealth 0.09
86 TestFunctional/serial/LogsCmd 1.45
87 TestFunctional/serial/LogsFileCmd 1.47
88 TestFunctional/serial/InvalidService 4.07
90 TestFunctional/parallel/ConfigCmd 0.51
91 TestFunctional/parallel/DashboardCmd 12.27
92 TestFunctional/parallel/DryRun 0.59
93 TestFunctional/parallel/InternationalLanguage 0.3
94 TestFunctional/parallel/StatusCmd 1.1
99 TestFunctional/parallel/AddonsCmd 0.21
100 TestFunctional/parallel/PersistentVolumeClaim 25.78
102 TestFunctional/parallel/SSHCmd 0.78
103 TestFunctional/parallel/CpCmd 2.27
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 1.68
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
114 TestFunctional/parallel/License 0.41
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
128 TestFunctional/parallel/ProfileCmd/profile_list 0.42
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/MountCmd/any-port 6.97
131 TestFunctional/parallel/MountCmd/specific-port 1.89
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.91
133 TestFunctional/parallel/ServiceCmd/List 0.64
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
138 TestFunctional/parallel/Version/short 0.05
139 TestFunctional/parallel/Version/components 1.15
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
145 TestFunctional/parallel/ImageCommands/Setup 0.83
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.36
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 205.21
164 TestMultiControlPlane/serial/DeployApp 37.18
165 TestMultiControlPlane/serial/PingHostFromPods 1.53
166 TestMultiControlPlane/serial/AddWorkerNode 61.13
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
169 TestMultiControlPlane/serial/CopyFile 20.03
170 TestMultiControlPlane/serial/StopSecondaryNode 12.97
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
172 TestMultiControlPlane/serial/RestartSecondaryNode 28.07
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.29
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 122.47
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.64
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
177 TestMultiControlPlane/serial/StopCluster 36.47
178 TestMultiControlPlane/serial/RestartCluster 166.31
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
180 TestMultiControlPlane/serial/AddSecondaryNode 85.09
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
185 TestJSONOutput/start/Command 80.32
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.88
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.26
210 TestKicCustomNetwork/create_custom_network 41.42
211 TestKicCustomNetwork/use_default_bridge_network 39.47
212 TestKicExistingNetwork 35.75
213 TestKicCustomSubnet 34.51
214 TestKicStaticIP 37.69
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 72.63
219 TestMountStart/serial/StartWithMountFirst 9.02
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.29
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.84
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 139.23
231 TestMultiNode/serial/DeployApp2Nodes 5.32
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 59.47
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.66
237 TestMultiNode/serial/StopNode 2.49
238 TestMultiNode/serial/StartAfterStop 8.86
239 TestMultiNode/serial/RestartKeepsNodes 75.76
240 TestMultiNode/serial/DeleteNode 5.53
241 TestMultiNode/serial/StopMultiNode 24.02
242 TestMultiNode/serial/RestartMultiNode 52.01
243 TestMultiNode/serial/ValidateNameConflict 33.22
248 TestPreload 126.5
250 TestScheduledStopUnix 109.33
253 TestInsufficientStorage 13.66
254 TestRunningBinaryUpgrade 53.91
256 TestKubernetesUpgrade 365.99
257 TestMissingContainerUpgrade 116.46
259 TestPause/serial/Start 62.05
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.14
262 TestNoKubernetes/serial/StartWithK8s 43.69
263 TestNoKubernetes/serial/StartWithStopK8s 7.68
264 TestNoKubernetes/serial/Start 9.09
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
266 TestNoKubernetes/serial/ProfileList 1.2
267 TestPause/serial/SecondStartNoReconfiguration 21.97
268 TestNoKubernetes/serial/Stop 1.41
269 TestNoKubernetes/serial/StartNoArgs 7.75
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
272 TestStoppedBinaryUpgrade/Setup 0.68
273 TestStoppedBinaryUpgrade/Upgrade 54.36
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
289 TestNetworkPlugins/group/false 3.67
294 TestStartStop/group/old-k8s-version/serial/FirstStart 85.96
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.37
297 TestStartStop/group/old-k8s-version/serial/Stop 12.01
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 47.12
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.96
307 TestStartStop/group/embed-certs/serial/FirstStart 51.69
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.47
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.27
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.12
313 TestStartStop/group/embed-certs/serial/DeployApp 10.49
315 TestStartStop/group/embed-certs/serial/Stop 12.62
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
317 TestStartStop/group/embed-certs/serial/SecondStart 58.19
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
323 TestStartStop/group/no-preload/serial/FirstStart 114.6
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.14
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
329 TestStartStop/group/newest-cni/serial/FirstStart 42.44
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.49
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
334 TestStartStop/group/newest-cni/serial/SecondStart 17.21
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
339 TestNetworkPlugins/group/auto/Start 83.96
340 TestStartStop/group/no-preload/serial/DeployApp 11.41
342 TestStartStop/group/no-preload/serial/Stop 12.26
343 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.3
344 TestStartStop/group/no-preload/serial/SecondStart 50.72
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
346 TestNetworkPlugins/group/auto/KubeletFlags 0.32
347 TestNetworkPlugins/group/auto/NetCatPod 10.3
348 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
349 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
351 TestNetworkPlugins/group/auto/DNS 0.32
352 TestNetworkPlugins/group/auto/Localhost 0.26
353 TestNetworkPlugins/group/auto/HairPin 0.22
354 TestNetworkPlugins/group/kindnet/Start 85.47
355 TestNetworkPlugins/group/calico/Start 68.71
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
359 TestNetworkPlugins/group/kindnet/NetCatPod 13.3
360 TestNetworkPlugins/group/calico/KubeletFlags 0.4
361 TestNetworkPlugins/group/calico/NetCatPod 12.47
362 TestNetworkPlugins/group/calico/DNS 0.2
363 TestNetworkPlugins/group/kindnet/DNS 0.25
364 TestNetworkPlugins/group/calico/Localhost 0.18
365 TestNetworkPlugins/group/kindnet/Localhost 0.19
366 TestNetworkPlugins/group/calico/HairPin 0.17
367 TestNetworkPlugins/group/kindnet/HairPin 0.21
368 TestNetworkPlugins/group/custom-flannel/Start 71.55
369 TestNetworkPlugins/group/enable-default-cni/Start 82.55
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
372 TestNetworkPlugins/group/custom-flannel/DNS 0.16
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.38
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
380 TestNetworkPlugins/group/flannel/Start 73.86
381 TestNetworkPlugins/group/bridge/Start 89.97
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
384 TestNetworkPlugins/group/flannel/NetCatPod 10.28
385 TestNetworkPlugins/group/flannel/DNS 0.16
386 TestNetworkPlugins/group/flannel/Localhost 0.17
387 TestNetworkPlugins/group/flannel/HairPin 0.14
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
389 TestNetworkPlugins/group/bridge/NetCatPod 12.35
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (5.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-509805 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-509805 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.639544605s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.64s)
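
For reference, a download-only run can be reproduced outside the harness; it populates the image and preload caches without ever creating a node. A minimal sketch, assuming a hypothetical profile name "demo" and the default MINIKUBE_HOME of ~/.minikube (the preload-exists check that follows amounts to stat'ing the cached tarball):

    # fetch the kicbase image and the v1.28.0 CRI-O preload; no cluster is started
    minikube start --download-only -p demo --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker
    # the tarball path matches the one logged by preload.go above
    ls ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4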

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1020 12:16:54.645660  298259 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1020 12:16:54.645749  298259 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-509805
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-509805: exit status 85 (93.713244ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-509805 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-509805 │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:16:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:16:49.045586  298265 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:16:49.045703  298265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:16:49.045711  298265 out.go:374] Setting ErrFile to fd 2...
	I1020 12:16:49.045717  298265 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:16:49.045957  298265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	W1020 12:16:49.046101  298265 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21773-296391/.minikube/config/config.json: open /home/jenkins/minikube-integration/21773-296391/.minikube/config/config.json: no such file or directory
	I1020 12:16:49.046515  298265 out.go:368] Setting JSON to true
	I1020 12:16:49.047354  298265 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7159,"bootTime":1760955450,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 12:16:49.047418  298265 start.go:141] virtualization:  
	I1020 12:16:49.051431  298265 out.go:99] [download-only-509805] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1020 12:16:49.051633  298265 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball: no such file or directory
	I1020 12:16:49.051754  298265 notify.go:220] Checking for updates...
	I1020 12:16:49.055526  298265 out.go:171] MINIKUBE_LOCATION=21773
	I1020 12:16:49.058526  298265 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:16:49.061418  298265 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:16:49.064497  298265 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 12:16:49.067514  298265 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1020 12:16:49.073161  298265 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1020 12:16:49.073448  298265 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:16:49.104150  298265 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 12:16:49.104263  298265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:16:49.161000  298265 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-20 12:16:49.152189821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:16:49.161116  298265 docker.go:318] overlay module found
	I1020 12:16:49.164207  298265 out.go:99] Using the docker driver based on user configuration
	I1020 12:16:49.164247  298265 start.go:305] selected driver: docker
	I1020 12:16:49.164268  298265 start.go:925] validating driver "docker" against <nil>
	I1020 12:16:49.164506  298265 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:16:49.218589  298265 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-20 12:16:49.209675731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:16:49.218737  298265 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:16:49.219006  298265 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1020 12:16:49.219171  298265 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1020 12:16:49.222275  298265 out.go:171] Using Docker driver with root privileges
	I1020 12:16:49.225402  298265 cni.go:84] Creating CNI manager for ""
	I1020 12:16:49.225479  298265 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:16:49.225494  298265 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:16:49.225594  298265 start.go:349] cluster config:
	{Name:download-only-509805 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-509805 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:16:49.228527  298265 out.go:99] Starting "download-only-509805" primary control-plane node in "download-only-509805" cluster
	I1020 12:16:49.228562  298265 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:16:49.231486  298265 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:16:49.231537  298265 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 12:16:49.231702  298265 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:16:49.247073  298265 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 12:16:49.247275  298265 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1020 12:16:49.247380  298265 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 12:16:49.284330  298265 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1020 12:16:49.284378  298265 cache.go:58] Caching tarball of preloaded images
	I1020 12:16:49.284537  298265 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 12:16:49.287712  298265 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1020 12:16:49.287747  298265 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1020 12:16:49.379117  298265 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1020 12:16:49.379285  298265 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1020 12:16:53.965718  298265 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1020 12:16:53.966127  298265 profile.go:143] Saving config to /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/download-only-509805/config.json ...
	I1020 12:16:53.966178  298265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/download-only-509805/config.json: {Name:mkec9d42f305e1503eea0039dd7d46375b19a59f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1020 12:16:53.966391  298265 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1020 12:16:53.966604  298265 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-509805 host does not exist
	  To start a cluster, run: "minikube start -p download-only-509805"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
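
The non-zero exit here is expected: the stdout above says the control-plane host for this profile was never created, which is exactly what a download-only profile looks like, and "minikube logs" reports that condition with exit status 85 while still printing the audit table. A sketch of the same check, assuming the profile from this run still exists:

    out/minikube-linux-arm64 logs -p download-only-509805
    echo $?   # expected: 85, since the control-plane host was never started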

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-509805
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (6.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-029467 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-029467 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.089990928s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (6.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1020 12:17:01.198426  298259 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1020 12:17:01.198469  298259 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-029467
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-029467: exit status 85 (100.358883ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-509805 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-509805 │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │ 20 Oct 25 12:16 UTC │
	│ delete  │ -p download-only-509805                                                                                                                                                   │ download-only-509805 │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │ 20 Oct 25 12:16 UTC │
	│ start   │ -o=json --download-only -p download-only-029467 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-029467 │ jenkins │ v1.37.0 │ 20 Oct 25 12:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/20 12:16:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1020 12:16:55.153925  298465 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:16:55.154165  298465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:16:55.154193  298465 out.go:374] Setting ErrFile to fd 2...
	I1020 12:16:55.154213  298465 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:16:55.154540  298465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:16:55.155070  298465 out.go:368] Setting JSON to true
	I1020 12:16:55.156027  298465 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7166,"bootTime":1760955450,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 12:16:55.156139  298465 start.go:141] virtualization:  
	I1020 12:16:55.159693  298465 out.go:99] [download-only-029467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 12:16:55.159986  298465 notify.go:220] Checking for updates...
	I1020 12:16:55.162951  298465 out.go:171] MINIKUBE_LOCATION=21773
	I1020 12:16:55.166147  298465 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:16:55.169205  298465 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:16:55.172190  298465 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 12:16:55.175360  298465 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1020 12:16:55.181302  298465 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1020 12:16:55.181614  298465 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:16:55.205798  298465 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 12:16:55.205924  298465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:16:55.264616  298465 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-20 12:16:55.254906706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:16:55.264735  298465 docker.go:318] overlay module found
	I1020 12:16:55.267693  298465 out.go:99] Using the docker driver based on user configuration
	I1020 12:16:55.267731  298465 start.go:305] selected driver: docker
	I1020 12:16:55.267742  298465 start.go:925] validating driver "docker" against <nil>
	I1020 12:16:55.267847  298465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:16:55.320143  298465 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-20 12:16:55.311505656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:16:55.320304  298465 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1020 12:16:55.320600  298465 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1020 12:16:55.320758  298465 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1020 12:16:55.323886  298465 out.go:171] Using Docker driver with root privileges
	I1020 12:16:55.326763  298465 cni.go:84] Creating CNI manager for ""
	I1020 12:16:55.326834  298465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1020 12:16:55.326849  298465 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1020 12:16:55.326951  298465 start.go:349] cluster config:
	{Name:download-only-029467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-029467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:16:55.329879  298465 out.go:99] Starting "download-only-029467" primary control-plane node in "download-only-029467" cluster
	I1020 12:16:55.329904  298465 cache.go:123] Beginning downloading kic base image for docker with crio
	I1020 12:16:55.332742  298465 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1020 12:16:55.332772  298465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:16:55.332835  298465 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1020 12:16:55.348497  298465 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1020 12:16:55.348628  298465 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1020 12:16:55.348653  298465 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1020 12:16:55.348659  298465 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1020 12:16:55.348666  298465 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1020 12:16:55.376817  298465 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1020 12:16:55.376843  298465 cache.go:58] Caching tarball of preloaded images
	I1020 12:16:55.377007  298465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1020 12:16:55.380118  298465 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1020 12:16:55.380144  298465 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1020 12:16:55.459008  298465 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1020 12:16:55.459066  298465 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21773-296391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-029467 host does not exist
	  To start a cluster, run: "minikube start -p download-only-029467"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-029467
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1020 12:17:02.413218  298259 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-776162 --alsologtostderr --binary-mirror http://127.0.0.1:35451 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-776162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-776162
--- PASS: TestBinaryMirror (0.63s)
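
TestBinaryMirror verifies that kubectl/kubelet/kubeadm downloads can be redirected away from dl.k8s.io: the harness serves the binaries from a local HTTP endpoint (here http://127.0.0.1:35451) and passes it via --binary-mirror. A rough sketch of the same idea; the mirror directory and the use of python3's built-in server are hypothetical stand-ins for the test's in-process server:

    # serve pre-downloaded Kubernetes binaries locally (hypothetical directory)
    python3 -m http.server 35451 --directory ./k8s-binaries &
    # point minikube's binary downloads at the mirror instead of dl.k8s.io
    minikube start --download-only -p demo --binary-mirror http://127.0.0.1:35451 --driver=docker --container-runtime=crio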

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-399470
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-399470: exit status 85 (75.287493ms)

                                                
                                                
-- stdout --
	* Profile "addons-399470" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-399470"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-399470
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-399470: exit status 85 (79.997516ms)

                                                
                                                
-- stdout --
	* Profile "addons-399470" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-399470"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
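
Both PreSetup checks pin down the same contract: addon toggles against a profile that does not exist fail fast with exit status 85 and a pointer to "minikube profile list", rather than creating anything. A minimal sketch, assuming a deliberately bogus profile name:

    minikube addons enable dashboard -p no-such-profile
    echo $?   # expected: 85; nothing is created or modified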

                                                
                                    
TestAddons/Setup (172.02s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-399470 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-399470 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.023944401s)
--- PASS: TestAddons/Setup (172.02s)
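
The setup start enables fifteen addons in one shot via repeated --addons flags. The same state can also be reached incrementally after start, which is easier to bisect when one addon misbehaves; a sketch assuming a hypothetical profile name addons-demo:

    minikube start -p addons-demo --memory=4096 --driver=docker --container-runtime=crio
    minikube addons enable registry -p addons-demo
    minikube addons enable metrics-server -p addons-demo
    minikube addons list -p addons-demo   # shows per-addon enabled/disabled state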

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-399470 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-399470 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.79s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-399470 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-399470 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [be8fcf68-f08d-4336-8e61-4abda92125cd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [be8fcf68-f08d-4336-8e61-4abda92125cd] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004043376s
addons_test.go:694: (dbg) Run:  kubectl --context addons-399470 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-399470 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-399470 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-399470 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.79s)
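
The assertions above can be replayed by hand: with the gcp-auth addon active, newly created pods get GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT injected, backed by a mounted /google-app-creds.json. A sketch reusing the busybox pod and kube context from this run:

    kubectl --context addons-399470 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-399470 exec busybox -- cat /google-app-creds.json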

                                                
                                    
TestAddons/StoppedEnableDisable (12.48s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-399470
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-399470: (12.174884799s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-399470
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-399470
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-399470
--- PASS: TestAddons/StoppedEnableDisable (12.48s)
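
Note that addon enable/disable is accepted while the cluster is stopped: the change is recorded in the profile config and applied on the next start, which is exactly what this test exercises. Sketch:

    minikube stop -p addons-399470
    minikube addons enable dashboard -p addons-399470   # recorded now, applied on next start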

                                                
                                    
TestCertOptions (36.57s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-123220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1020 13:17:16.729451  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-123220 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.762279613s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-123220 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-123220 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-123220 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-123220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-123220
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-123220: (2.101582749s)
--- PASS: TestCertOptions (36.57s)
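
The openssl step is the substantive check here: it confirms that the extra --apiserver-ips/--apiserver-names landed in the serving certificate's SANs and that the non-default --apiserver-port took effect. A sketch filtering for just the SAN block, assuming the profile from this run is still up:

    minikube ssh -p cert-options-123220 -- \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'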

                                                
                                    
TestCertExpiration (343.93s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-066011 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-066011 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.74460352s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1020 13:19:56.300280  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (2m1.665656768s)
helpers_test.go:175: Cleaning up "cert-expiration-066011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-066011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-066011: (2.519263643s)
--- PASS: TestCertExpiration (343.93s)
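
The flow: issue certificates valid for only 3 minutes, wait out that window (hence the ~2 minute second start), then restart with --cert-expiration=8760h and confirm minikube re-issues the certs instead of failing on the expired ones. Condensed sketch of the same sequence:

    minikube start -p cert-expiration-066011 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180   # let the short-lived certs expire
    minikube start -p cert-expiration-066011 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio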

                                                
                                    
TestForceSystemdFlag (40.49s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-288536 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1020 13:14:39.374902  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-288536 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.618825369s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-288536 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
E1020 13:14:56.299831  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:175: Cleaning up "force-systemd-flag-288536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-288536
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-288536: (2.54637383s)
--- PASS: TestForceSystemdFlag (40.49s)
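
The ssh step reads CRI-O's drop-in config to confirm that --force-systemd switched the cgroup manager; the expected content is, roughly, cgroup_manager = "systemd" under [crio.runtime] (the exact layout of 02-crio.conf is minikube's own, so treat the grep below as an approximation):

    minikube start -p force-systemd-flag-288536 --memory=3072 --force-systemd --driver=docker --container-runtime=crio
    minikube ssh -p force-systemd-flag-288536 -- cat /etc/crio/crio.conf.d/02-crio.conf | grep cgroup_manager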

                                                
                                    
TestForceSystemdEnv (39.53s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-534257 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1020 13:15:19.804186  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-534257 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.041337364s)
helpers_test.go:175: Cleaning up "force-systemd-env-534257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-534257
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-534257: (2.484928222s)
--- PASS: TestForceSystemdEnv (39.53s)

TestErrorSpam/setup (34.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-194997 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-194997 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-194997 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-194997 --driver=docker  --container-runtime=crio: (34.633862956s)
--- PASS: TestErrorSpam/setup (34.63s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (6.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause: exit status 80 (1.634053598s)

-- stdout --
	* Pausing node nospam-194997 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:23:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause: exit status 80 (2.487807152s)

-- stdout --
	* Pausing node nospam-194997 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:24:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause: exit status 80 (2.223076885s)

-- stdout --
	* Pausing node nospam-194997 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:24:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.35s)
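
Note: all three pause attempts above fail the same way. Before pausing, minikube shells into the node and asks runc for the running containers; on this crio node /run/runc does not exist, so the listing exits 1 and minikube reports GUEST_PAUSE with exit status 80. A minimal Go sketch of that probe, purely for illustration (this is not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same step the pause path runs: minikube ssh -> sudo runc list -f json.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "nospam-194997",
			"ssh", "sudo runc list -f json").CombinedOutput()
		if err != nil {
			// On this node runc prints "open /run/runc: no such file or directory"
			// and exits 1, which minikube surfaces as exit status 80.
			fmt.Printf("probe failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("running containers: %s\n", out)
	}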

TestErrorSpam/unpause (6.06s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause: exit status 80 (1.966615828s)

-- stdout --
	* Unpausing node nospam-194997 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:24:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause: exit status 80 (1.847910798s)

-- stdout --
	* Unpausing node nospam-194997 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:24:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause: exit status 80 (2.246450467s)

-- stdout --
	* Unpausing node nospam-194997 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-20T12:24:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.06s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 stop: (1.315931595s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-194997 --log_dir /tmp/nospam-194997 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21773-296391/.minikube/files/etc/test/nested/copy/298259/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-749689 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1020 12:24:56.304455  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:56.310815  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:56.322185  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:56.344573  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:56.385978  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:56.467490  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:56.629010  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:56.950761  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:57.592877  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:24:58.874912  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:25:01.436707  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:25:06.559562  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:25:16.801097  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:25:37.283082  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-749689 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.009041311s)
--- PASS: TestFunctional/serial/StartWithProxy (81.01s)
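
Note: the repeated "Loading client cert failed" lines above are not new failures; they come from client-go's tls-transport cache retrying a certificate whose profile (addons-399470) was already deleted. The timestamp gaps roughly double per attempt (~6ms, ~11ms, ~22ms, ... ~20s), i.e. exponential backoff. A minimal loop with the same shape, with the path and delays hard-coded purely for illustration:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		cert := "/home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt"
		delay := 6 * time.Millisecond
		for attempt := 1; attempt <= 14; attempt++ {
			if _, err := os.ReadFile(cert); err != nil {
				fmt.Printf("attempt %d: %v (next retry in %v)\n", attempt, err, delay)
			}
			time.Sleep(delay)
			delay *= 2 // doubling interval matches the log's timestamp spacing
		}
	}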

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.79s)

=== RUN   TestFunctional/serial/SoftStart
I1020 12:25:37.662006  298259 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-749689 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-749689 --alsologtostderr -v=8: (26.787326788s)
functional_test.go:678: soft start took 26.790768736s for "functional-749689" cluster.
I1020 12:26:04.449731  298259 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (26.79s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-749689 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 cache add registry.k8s.io/pause:3.1: (1.235225901s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 cache add registry.k8s.io/pause:3.3: (1.146134297s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 cache add registry.k8s.io/pause:latest: (1.23012077s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-749689 /tmp/TestFunctionalserialCacheCmdcacheadd_local3518155770/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cache add minikube-local-cache-test:functional-749689
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cache delete minikube-local-cache-test:functional-749689
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-749689
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.711073ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
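
Note: the cache_reload sequence above is remove, verify-missing, reload, verify-present. A compact sketch of the same cycle driven through os/exec (profile name and image taken from the log above; the mk helper is illustrative and error handling is trimmed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func mk(args ...string) error {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s", args, out)
		return err
	}

	func main() {
		p, img := "functional-749689", "registry.k8s.io/pause:latest"
		mk("-p", p, "ssh", "sudo crictl rmi "+img)
		// Expected to fail: the image was just removed from the node.
		if mk("-p", p, "ssh", "sudo crictl inspecti "+img) == nil {
			fmt.Println("unexpected: image still present")
		}
		// Reload pushes the host-side cached image back onto the node...
		mk("-p", p, "cache", "reload")
		// ...so the same inspecti now succeeds.
		if mk("-p", p, "ssh", "sudo crictl inspecti "+img) != nil {
			fmt.Println("unexpected: image missing after reload")
		}
	}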

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 kubectl -- --context functional-749689 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-749689 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (55.61s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-749689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1020 12:26:18.245635  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-749689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (55.609750717s)
functional_test.go:776: restart took 55.609853045s for "functional-749689" cluster.
I1020 12:27:07.673142  298259 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (55.61s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-749689 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
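
Note: the phase/status pairs above come from walking the control-plane pods and reading .status.phase plus the Ready condition. An illustrative one-shot equivalent via kubectl's jsonpath (the test itself parses the JSON in Go rather than doing this):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-749689",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o",
			`jsonpath={range .items[*]}{.metadata.labels.component}{" phase: "}{.status.phase}{"\n"}{end}`,
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		fmt.Print(string(out))
	}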

TestFunctional/serial/LogsCmd (1.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 logs: (1.450290973s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 logs --file /tmp/TestFunctionalserialLogsFileCmd2038504892/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 logs --file /tmp/TestFunctionalserialLogsFileCmd2038504892/001/logs.txt: (1.470577573s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.07s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-749689 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-749689
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-749689: exit status 115 (378.116398ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31852 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-749689 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)
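
Note: SVC_UNREACHABLE above fires because invalid-svc is assigned a NodePort URL but has no ready endpoints, so minikube refuses to hand back a URL that nothing serves. A minimal sketch of that emptiness check (illustrative only; minikube inspects pods through the API rather than shelling out to kubectl):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, _ := exec.Command("kubectl", "--context", "functional-749689",
			"get", "endpoints", "invalid-svc",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if strings.TrimSpace(string(out)) == "" {
			fmt.Println("no running pod for service invalid-svc found")
		}
	}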

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 config get cpus: exit status 14 (70.707082ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 config get cpus: exit status 14 (96.508444ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (12.27s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-749689 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-749689 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 324875: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.27s)

TestFunctional/parallel/DryRun (0.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-749689 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-749689 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (233.939216ms)

-- stdout --
	* [functional-749689] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1020 12:37:43.686747  324347 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:37:43.686870  324347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:37:43.686880  324347 out.go:374] Setting ErrFile to fd 2...
	I1020 12:37:43.686886  324347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:37:43.687247  324347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:37:43.688140  324347 out.go:368] Setting JSON to false
	I1020 12:37:43.689263  324347 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8414,"bootTime":1760955450,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 12:37:43.689356  324347 start.go:141] virtualization:  
	I1020 12:37:43.692670  324347 out.go:179] * [functional-749689] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 12:37:43.696639  324347 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:37:43.696717  324347 notify.go:220] Checking for updates...
	I1020 12:37:43.702802  324347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:37:43.708613  324347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:37:43.715931  324347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 12:37:43.718955  324347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 12:37:43.721662  324347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:37:43.726007  324347 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:37:43.726616  324347 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:37:43.749422  324347 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 12:37:43.749533  324347 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:37:43.824651  324347 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 12:37:43.815460639 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:37:43.824755  324347 docker.go:318] overlay module found
	I1020 12:37:43.827809  324347 out.go:179] * Using the docker driver based on existing profile
	I1020 12:37:43.830539  324347 start.go:305] selected driver: docker
	I1020 12:37:43.830556  324347 start.go:925] validating driver "docker" against &{Name:functional-749689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-749689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:37:43.830657  324347 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:37:43.834119  324347 out.go:203] 
	W1020 12:37:43.836990  324347 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1020 12:37:43.839829  324347 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-749689 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.59s)
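
Note: the dry run fails fast because the requested memory is validated against a usable minimum before any node is created. A sketch of that guard using the 1800MB floor reported above (the function name and constant are illustrative, not minikube's actual code):

	package main

	import "fmt"

	const minUsableMB = 1800 // floor reported in the RSRC_INSUFFICIENT_REQ_MEMORY message

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, as in the dry run above
		fmt.Println(validateMemory(4096)) // passes
	}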

TestFunctional/parallel/InternationalLanguage (0.3s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-749689 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-749689 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (299.166275ms)

-- stdout --
	* [functional-749689] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1020 12:37:43.406817  324257 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:37:43.406999  324257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:37:43.407014  324257 out.go:374] Setting ErrFile to fd 2...
	I1020 12:37:43.407022  324257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:37:43.408895  324257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:37:43.409476  324257 out.go:368] Setting JSON to false
	I1020 12:37:43.410816  324257 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8414,"bootTime":1760955450,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 12:37:43.410903  324257 start.go:141] virtualization:  
	I1020 12:37:43.415115  324257 out.go:179] * [functional-749689] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1020 12:37:43.418227  324257 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 12:37:43.418432  324257 notify.go:220] Checking for updates...
	I1020 12:37:43.424353  324257 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 12:37:43.427435  324257 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 12:37:43.430353  324257 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 12:37:43.434081  324257 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 12:37:43.437010  324257 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 12:37:43.440557  324257 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:37:43.441140  324257 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 12:37:43.495894  324257 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 12:37:43.496027  324257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:37:43.582711  324257 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 12:37:43.571734559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:37:43.582817  324257 docker.go:318] overlay module found
	I1020 12:37:43.585899  324257 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1020 12:37:43.590067  324257 start.go:305] selected driver: docker
	I1020 12:37:43.590092  324257 start.go:925] validating driver "docker" against &{Name:functional-749689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-749689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1020 12:37:43.590189  324257 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 12:37:43.593809  324257 out.go:203] 
	W1020 12:37:43.596967  324257 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1020 12:37:43.599860  324257 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
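The status checks above drive minikube's Go-template and JSON output paths. A rough sketch of the same flags in day-to-day use (the field names come from the test's template; jq is an assumption about the reader's toolbox):

    # Print one field of cluster state via a Go template:
    minikube -p functional-749689 status -f '{{.Host}}'
    # Machine-readable status for scripting (field name from the test's template):
    minikube -p functional-749689 status -o json | jq -r '.Host'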

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)
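The JSON listing above is the scripting-friendly form of `addons list`. A hedged sketch, assuming the output is a map of addon name to an object with a Status field (the exact shape can vary between minikube releases):

    # List only the enabled addons; jq and the JSON shape are assumptions:
    minikube -p functional-749689 addons list -o json \
      | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'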

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [decc9dd8-81b4-4f9e-961d-f2d9ab7a76b6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004408754s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-749689 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-749689 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-749689 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-749689 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9992f4fb-1f48-4e3c-af68-018595d1d650] Pending
helpers_test.go:352: "sp-pod" [9992f4fb-1f48-4e3c-af68-018595d1d650] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9992f4fb-1f48-4e3c-af68-018595d1d650] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003471921s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-749689 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-749689 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-749689 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fa82b592-a9bf-4219-b4d5-261f7f7612b8] Pending
helpers_test.go:352: "sp-pod" [fa82b592-a9bf-4219-b4d5-261f7f7612b8] Running
E1020 12:27:40.167776  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002997506s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-749689 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.78s)
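The test applies testdata/storage-provisioner/pvc.yaml and pod.yaml, writes a file through the mount, deletes the pod, and checks the file survives into a second pod. A minimal sketch of equivalent manifests; the claim name, pod name, label, container name, and mount path are taken from the log, while the storage size and image are illustrative assumptions:

    kubectl --context functional-749689 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: docker.io/library/nginx
        volumeMounts:
        - mountPath: /tmp/mount
          name: mypd
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
    EOF
    # Data written here outlives the pod, because it lives on the claim:
    kubectl --context functional-749689 exec sp-pod -- touch /tmp/mount/foo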

                                                
                                    
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)
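Both invocations above pass the remote command as a quoted argument. An equivalent sketch using the `--` separator, which keeps minikube from parsing the remote command's own flags:

    # Run a one-off command on the node; -- ends minikube's flag parsing:
    minikube -p functional-749689 ssh -- uname -a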

                                                
                                    
TestFunctional/parallel/CpCmd (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh -n functional-749689 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cp functional-749689:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3447331902/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh -n functional-749689 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh -n functional-749689 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.27s)
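The three cp calls above cover host-to-node, node-to-host, and copying into a directory that does not exist yet. The general shape, with a `<node>:` prefix selecting the remote side (local file names here are illustrative):

    # Host -> node:
    minikube -p functional-749689 cp ./local.txt functional-749689:/home/docker/local.txt
    # Node -> host:
    minikube -p functional-749689 cp functional-749689:/home/docker/cp-test.txt ./copied-back.txt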

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/298259/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo cat /etc/test/nested/copy/298259/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)
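FileSync verifies that files placed under the host's $MINIKUBE_HOME/files tree show up at the same relative path inside the node. A sketch of that workflow, assuming the default ~/.minikube home and that the sync runs as part of `minikube start`:

    # Anything under ~/.minikube/files/ is copied to the node at the same path:
    mkdir -p ~/.minikube/files/etc/test/nested/copy/298259
    echo 'Test file for checking file sync process' \
      > ~/.minikube/files/etc/test/nested/copy/298259/hosts
    minikube start -p functional-749689            # sync happens during start
    minikube -p functional-749689 ssh -- sudo cat /etc/test/nested/copy/298259/hosts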

                                                
                                    
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/298259.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo cat /etc/ssl/certs/298259.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/298259.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo cat /usr/share/ca-certificates/298259.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2982592.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo cat /etc/ssl/certs/2982592.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2982592.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo cat /usr/share/ca-certificates/2982592.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)
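The `.0` paths above are OpenSSL subject-hash links: CertSync installs the PEM files into the node's trust store, and the hashed filename is derived from the certificate subject. A sketch of how the hash maps to the filename, assuming the synced cert sits under ~/.minikube/certs on the host:

    # Prints the subject hash, e.g. 51391683; the cert is then reachable
    # inside the node as /etc/ssl/certs/<hash>.0:
    openssl x509 -noout -subject_hash -in ~/.minikube/certs/298259.pem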

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-749689 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
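The go-template above iterates the first node's label map. An equivalent jsonpath query, shown as an alternative rather than what the test runs:

    # Dump the same label map with jsonpath instead of a go-template:
    kubectl --context functional-749689 get nodes -o jsonpath='{.items[0].metadata.labels}'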

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 ssh "sudo systemctl is-active docker": exit status 1 (362.256031ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 ssh "sudo systemctl is-active containerd": exit status 1 (352.005962ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
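The exit status 1 from `minikube ssh` wraps the remote `systemctl is-active`, which prints the state and exits 3 when a unit is inactive (hence the `Process exited with status 3` lines in stderr). A quick way to see the convention, with crio as the unit that should be active on this profile:

    # is-active exits 0 only for "active"; anything else is non-zero:
    minikube -p functional-749689 ssh -- sudo systemctl is-active crio && echo crio is active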

                                                
                                    
TestFunctional/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-749689 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-749689 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-749689 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-749689 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 320778: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-749689 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-749689 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d2e490a1-a1aa-4261-9025-8a90078dc0a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d2e490a1-a1aa-4261-9025-8a90078dc0a3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003223554s
I1020 12:27:26.172061  298259 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-749689 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.234.108 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-749689 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
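Taken together, the serial tunnel steps are: start `minikube tunnel`, create a LoadBalancer service, wait for an ingress IP, hit it directly, and confirm the route goes away when the tunnel stops. A hedged end-to-end sketch (the service name and port mirror the log's nginx-svc; tunnel may prompt for elevated privileges to install routes):

    minikube -p functional-749689 tunnel & TUNNEL_PID=$!
    kubectl --context functional-749689 expose pod nginx-svc --type=LoadBalancer --port=80
    LB_IP=$(kubectl --context functional-749689 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl "http://${LB_IP}"     # served through the tunnel's route
    kill "$TUNNEL_PID"         # stopping the tunnel removes the route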

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "361.911817ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.24435ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "373.101906ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "66.756812ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
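The JSON listing groups profiles into top-level valid and invalid arrays, which makes it the scripting-friendly counterpart of the table output. A sketch, with jq and the exact field names as assumptions about current releases:

    # Print the names of all healthy profiles:
    minikube profile list -o json | jq -r '.valid[].Name'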

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdany-port3991449237/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760963851447009544" to /tmp/TestFunctionalparallelMountCmdany-port3991449237/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760963851447009544" to /tmp/TestFunctionalparallelMountCmdany-port3991449237/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760963851447009544" to /tmp/TestFunctionalparallelMountCmdany-port3991449237/001/test-1760963851447009544
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.772225ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1020 12:37:31.809086  298259 retry.go:31] will retry after 526.399274ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 20 12:37 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 20 12:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 20 12:37 test-1760963851447009544
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh cat /mount-9p/test-1760963851447009544
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-749689 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [dca39710-189e-4f9f-aad4-019206b6e01c] Pending
helpers_test.go:352: "busybox-mount" [dca39710-189e-4f9f-aad4-019206b6e01c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [dca39710-189e-4f9f-aad4-019206b6e01c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [dca39710-189e-4f9f-aad4-019206b6e01c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004160241s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-749689 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdany-port3991449237/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.97s)
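The mount tests follow a fixed loop: start a foreground 9p mount, prove it with findmnt, exercise it from a pod, then unmount. A condensed sketch of the same loop (the host path is illustrative; --kill is the cleanup that the VerifyCleanup test below relies on):

    minikube mount -p functional-749689 /tmp/demo:/mount-9p &   # 9p server stays running
    minikube -p functional-749689 ssh -- findmnt -T /mount-9p   # verify the guest sees it
    minikube mount -p functional-749689 --kill=true             # kill all mount processes for the profile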

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdspecific-port1134604106/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.122578ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1020 12:37:38.780002  298259 retry.go:31] will retry after 465.790715ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdspecific-port1134604106/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 ssh "sudo umount -f /mount-9p": exit status 1 (289.522234ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-749689 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdspecific-port1134604106/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1087436304/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1087436304/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1087436304/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T" /mount1: exit status 1 (557.519741ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1020 12:37:40.871175  298259 retry.go:31] will retry after 401.862986ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-749689 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1087436304/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1087436304/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-749689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1087436304/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.91s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 service list -o json
functional_test.go:1504: Took "603.854728ms" to run "out/minikube-linux-arm64 -p functional-749689 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)
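The JSON form of `service list` is an array of service records. A sketch for pulling fields out of it, assuming jq and the Name field name emitted by current minikube:

    # One line per service name:
    minikube -p functional-749689 service list -o json | jq -r '.[].Name'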

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 version -o=json --components: (1.145693466s)
--- PASS: TestFunctional/parallel/Version/components (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-749689 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-749689 image ls --format short --alsologtostderr:
I1020 12:38:04.666790  326987 out.go:360] Setting OutFile to fd 1 ...
I1020 12:38:04.666990  326987 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:04.667004  326987 out.go:374] Setting ErrFile to fd 2...
I1020 12:38:04.667009  326987 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:04.667314  326987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
I1020 12:38:04.668397  326987 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:04.668590  326987 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:04.669449  326987 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
I1020 12:38:04.689907  326987 ssh_runner.go:195] Run: systemctl --version
I1020 12:38:04.689965  326987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
I1020 12:38:04.719362  326987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
I1020 12:38:04.827222  326987 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-749689 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-749689 image ls --format table --alsologtostderr:
I1020 12:38:05.246667  327165 out.go:360] Setting OutFile to fd 1 ...
I1020 12:38:05.246784  327165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:05.246791  327165 out.go:374] Setting ErrFile to fd 2...
I1020 12:38:05.246796  327165 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:05.247082  327165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
I1020 12:38:05.247743  327165 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:05.247933  327165 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:05.248582  327165 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
I1020 12:38:05.274748  327165 ssh_runner.go:195] Run: systemctl --version
I1020 12:38:05.274812  327165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
I1020 12:38:05.301502  327165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
I1020 12:38:05.423103  327165 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-749689 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["regis
try.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"
1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209
fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/stor
age-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@s
ha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-749689 image ls --format json --alsologtostderr:
I1020 12:38:04.959678  327068 out.go:360] Setting OutFile to fd 1 ...
I1020 12:38:04.960837  327068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:04.960861  327068 out.go:374] Setting ErrFile to fd 2...
I1020 12:38:04.960867  327068 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:04.961139  327068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
I1020 12:38:04.961772  327068 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:04.962308  327068 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:04.963362  327068 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
I1020 12:38:04.983661  327068 ssh_runner.go:195] Run: systemctl --version
I1020 12:38:04.983718  327068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
I1020 12:38:05.022873  327068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
I1020 12:38:05.135958  327068 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
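Per the stderr above, `image ls` shells into the node and reads `sudo crictl images --output json`, so the JSON mirrors the runtime's image records. A sketch for filtering it on the host side (jq is assumed; `repoTags` is the field visible in the stdout above):

    # All tags known to the node's runtime, skipping untagged images:
    minikube -p functional-749689 image ls --format json | jq -r '.[].repoTags[]?'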

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-749689 image ls --format yaml --alsologtostderr:
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-749689 image ls --format yaml --alsologtostderr:
I1020 12:38:04.678672  326988 out.go:360] Setting OutFile to fd 1 ...
I1020 12:38:04.679212  326988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:04.679250  326988 out.go:374] Setting ErrFile to fd 2...
I1020 12:38:04.679270  326988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:04.679564  326988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
I1020 12:38:04.680204  326988 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:04.680394  326988 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:04.680927  326988 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
I1020 12:38:04.703780  326988 ssh_runner.go:195] Run: systemctl --version
I1020 12:38:04.703828  326988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
I1020 12:38:04.723960  326988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
I1020 12:38:04.835144  326988 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-749689 ssh pgrep buildkitd: exit status 1 (369.031191ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image build -t localhost/my-image:functional-749689 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-749689 image build -t localhost/my-image:functional-749689 testdata/build --alsologtostderr: (3.34240145s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-749689 image build -t localhost/my-image:functional-749689 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> bd70eb6d18d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-749689
--> 44ae4e4e98e
Successfully tagged localhost/my-image:functional-749689
44ae4e4e98ee04ab24598957a710c0da75197c7e5bf6547630eff5a415c48518
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-749689 image build -t localhost/my-image:functional-749689 testdata/build --alsologtostderr:
I1020 12:38:05.311541  327170 out.go:360] Setting OutFile to fd 1 ...
I1020 12:38:05.312437  327170 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:05.312473  327170 out.go:374] Setting ErrFile to fd 2...
I1020 12:38:05.312492  327170 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1020 12:38:05.312785  327170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
I1020 12:38:05.313441  327170 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:05.314210  327170 config.go:182] Loaded profile config "functional-749689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1020 12:38:05.314938  327170 cli_runner.go:164] Run: docker container inspect functional-749689 --format={{.State.Status}}
I1020 12:38:05.341498  327170 ssh_runner.go:195] Run: systemctl --version
I1020 12:38:05.341583  327170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-749689
I1020 12:38:05.361261  327170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/functional-749689/id_rsa Username:docker}
I1020 12:38:05.474670  327170 build_images.go:161] Building image from path: /tmp/build.1226287821.tar
I1020 12:38:05.474745  327170 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1020 12:38:05.482819  327170 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1226287821.tar
I1020 12:38:05.486542  327170 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1226287821.tar: stat -c "%s %y" /var/lib/minikube/build/build.1226287821.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1226287821.tar': No such file or directory
I1020 12:38:05.486571  327170 ssh_runner.go:362] scp /tmp/build.1226287821.tar --> /var/lib/minikube/build/build.1226287821.tar (3072 bytes)
I1020 12:38:05.505815  327170 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1226287821
I1020 12:38:05.513848  327170 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1226287821 -xf /var/lib/minikube/build/build.1226287821.tar
I1020 12:38:05.522291  327170 crio.go:315] Building image: /var/lib/minikube/build/build.1226287821
I1020 12:38:05.522394  327170 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-749689 /var/lib/minikube/build/build.1226287821 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1020 12:38:08.549547  327170 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-749689 /var/lib/minikube/build/build.1226287821 --cgroup-manager=cgroupfs: (3.027116834s)
I1020 12:38:08.549617  327170 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1226287821
I1020 12:38:08.558421  327170 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1226287821.tar
I1020 12:38:08.566344  327170 build_images.go:217] Built localhost/my-image:functional-749689 from /tmp/build.1226287821.tar
I1020 12:38:08.566377  327170 build_images.go:133] succeeded building to: functional-749689
I1020 12:38:08.566384  327170 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
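The three STEP lines pin down the Dockerfile that testdata/build must contain, so the build is easy to replay by hand. A minimal sketch (the content.txt payload is a placeholder; only the instructions themselves are confirmed by the log, and on crio minikube tars this context up and runs podman build on the node, as the trace shows):

    mkdir -p /tmp/build && cd /tmp/build
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo placeholder > content.txt    # hypothetical payload
    out/minikube-linux-arm64 -p functional-749689 image build \
      -t localhost/my-image:functional-749689 .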

TestFunctional/parallel/ImageCommands/Setup (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-749689
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.36s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.36s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
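All three update-context cases run the same command; it rewrites the profile's kubeconfig entry so the server address matches the container's current IP and port. A sketch of checking the effect afterwards (the context name matching the profile name is the usual minikube convention, assumed here):

    out/minikube-linux-arm64 -p functional-749689 update-context
    kubectl config view --minify --context functional-749689 \
      -o jsonpath='{.clusters[0].cluster.server}'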

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image rm kicbase/echo-server:functional-749689 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-749689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
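The remove test only asserts that the tag disappears from the runtime's listing. The same check by hand, as a sketch:

    out/minikube-linux-arm64 -p functional-749689 image rm kicbase/echo-server:functional-749689
    out/minikube-linux-arm64 -p functional-749689 image ls \
      | grep echo-server:functional-749689 && echo still-present || echo removed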

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-749689
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-749689
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-749689
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (205.21s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1020 12:39:56.300567  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:41:19.371362  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m24.251564315s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (205.21s)

TestMultiControlPlane/serial/DeployApp (37.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 kubectl -- rollout status deployment/busybox: (34.427443127s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-2mlj6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-r9gbh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-tfscg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-2mlj6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-r9gbh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-tfscg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-2mlj6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-r9gbh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-tfscg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (37.18s)
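The rollout has to report all three busybox replicas ready before the DNS probes run, because each pod is exec'd by name. The nine exec calls above collapse into a loop; this sketch assumes, as in this run, that the busybox pods are the only ones in the default namespace:

    out/minikube-linux-arm64 -p ha-805676 kubectl -- rollout status deployment/busybox
    for POD in $(out/minikube-linux-arm64 -p ha-805676 kubectl -- get pods \
        -o "jsonpath={.items[*].metadata.name}"); do
      out/minikube-linux-arm64 -p ha-805676 kubectl -- exec "$POD" -- \
        nslookup kubernetes.default.svc.cluster.local
    done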

TestMultiControlPlane/serial/PingHostFromPods (1.53s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-2mlj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-2mlj6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-r9gbh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-r9gbh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-tfscg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 kubectl -- exec busybox-7b57f96db7-tfscg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)
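The awk/cut pipeline leans on busybox nslookup's fixed layout: line 5 of its output carries the resolved address, and the third space-separated field is the IP itself, here 192.168.49.1, the docker network gateway that the follow-up ping targets. Pulled out as a standalone sketch:

    POD=busybox-7b57f96db7-2mlj6    # any of the three replicas above
    HOST_IP=$(out/minikube-linux-arm64 -p ha-805676 kubectl -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-arm64 -p ha-805676 kubectl -- exec "$POD" -- ping -c 1 "$HOST_IP"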

TestMultiControlPlane/serial/AddWorkerNode (61.13s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 node add --alsologtostderr -v 5
E1020 12:42:16.728685  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:16.735066  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:16.746498  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:16.767860  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:16.809326  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:16.893681  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:17.055454  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:17.376862  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:18.018602  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:19.300131  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:21.862387  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:26.984746  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:37.226170  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:42:57.707737  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 node add --alsologtostderr -v 5: (1m0.066065079s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5: (1.064046012s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.13s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-805676 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.095099422s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)
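The HAppy*/Degraded* checks all parse profile list --output json and look at the profile's Status field, which for multi-control-plane clusters reports values like HAppy and Degraded (hence the test names). A jq sketch, assuming the usual valid/invalid top-level arrays in that JSON:

    out/minikube-linux-arm64 profile list --output json \
      | jq -r '.valid[] | "\(.Name)\t\(.Status)"'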

TestMultiControlPlane/serial/CopyFile (20.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 status --output json --alsologtostderr -v 5: (1.007802011s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp testdata/cp-test.txt ha-805676:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2302304311/001/cp-test_ha-805676.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676:/home/docker/cp-test.txt ha-805676-m02:/home/docker/cp-test_ha-805676_ha-805676-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m02 "sudo cat /home/docker/cp-test_ha-805676_ha-805676-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676:/home/docker/cp-test.txt ha-805676-m03:/home/docker/cp-test_ha-805676_ha-805676-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m03 "sudo cat /home/docker/cp-test_ha-805676_ha-805676-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676:/home/docker/cp-test.txt ha-805676-m04:/home/docker/cp-test_ha-805676_ha-805676-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m04 "sudo cat /home/docker/cp-test_ha-805676_ha-805676-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp testdata/cp-test.txt ha-805676-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2302304311/001/cp-test_ha-805676-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m02:/home/docker/cp-test.txt ha-805676:/home/docker/cp-test_ha-805676-m02_ha-805676.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 "sudo cat /home/docker/cp-test_ha-805676-m02_ha-805676.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m02:/home/docker/cp-test.txt ha-805676-m03:/home/docker/cp-test_ha-805676-m02_ha-805676-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m03 "sudo cat /home/docker/cp-test_ha-805676-m02_ha-805676-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m02:/home/docker/cp-test.txt ha-805676-m04:/home/docker/cp-test_ha-805676-m02_ha-805676-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m04 "sudo cat /home/docker/cp-test_ha-805676-m02_ha-805676-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp testdata/cp-test.txt ha-805676-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2302304311/001/cp-test_ha-805676-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m03:/home/docker/cp-test.txt ha-805676:/home/docker/cp-test_ha-805676-m03_ha-805676.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 "sudo cat /home/docker/cp-test_ha-805676-m03_ha-805676.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m03:/home/docker/cp-test.txt ha-805676-m02:/home/docker/cp-test_ha-805676-m03_ha-805676-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m02 "sudo cat /home/docker/cp-test_ha-805676-m03_ha-805676-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m03:/home/docker/cp-test.txt ha-805676-m04:/home/docker/cp-test_ha-805676-m03_ha-805676-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m04 "sudo cat /home/docker/cp-test_ha-805676-m03_ha-805676-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp testdata/cp-test.txt ha-805676-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2302304311/001/cp-test_ha-805676-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m04:/home/docker/cp-test.txt ha-805676:/home/docker/cp-test_ha-805676-m04_ha-805676.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 "sudo cat /home/docker/cp-test_ha-805676-m04_ha-805676.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m04:/home/docker/cp-test.txt ha-805676-m02:/home/docker/cp-test_ha-805676-m04_ha-805676-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m02 "sudo cat /home/docker/cp-test_ha-805676-m04_ha-805676-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 cp ha-805676-m04:/home/docker/cp-test.txt ha-805676-m03:/home/docker/cp-test_ha-805676-m04_ha-805676-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676-m03 "sudo cat /home/docker/cp-test_ha-805676-m04_ha-805676-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.03s)
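The forty-odd invocations above are a full cross-product: seed cp-test.txt onto each node, copy it from there to every other node, then cat each copy over ssh -n. The same matrix as a loop sketch:

    NODES="ha-805676 ha-805676-m02 ha-805676-m03 ha-805676-m04"
    for SRC in $NODES; do
      out/minikube-linux-arm64 -p ha-805676 cp testdata/cp-test.txt "$SRC:/home/docker/cp-test.txt"
      for DST in $NODES; do
        [ "$SRC" = "$DST" ] && continue
        out/minikube-linux-arm64 -p ha-805676 cp "$SRC:/home/docker/cp-test.txt" \
          "$DST:/home/docker/cp-test_${SRC}_${DST}.txt"
        out/minikube-linux-arm64 -p ha-805676 ssh -n "$DST" \
          "sudo cat /home/docker/cp-test_${SRC}_${DST}.txt"
      done
    done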

TestMultiControlPlane/serial/StopSecondaryNode (12.97s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 node stop m02 --alsologtostderr -v 5
E1020 12:43:38.669356  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 node stop m02 --alsologtostderr -v 5: (12.156086276s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5: exit status 7 (812.300535ms)

-- stdout --
	ha-805676
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-805676-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-805676-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-805676-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1020 12:43:50.180129  342029 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:43:50.180298  342029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:43:50.180313  342029 out.go:374] Setting ErrFile to fd 2...
	I1020 12:43:50.180319  342029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:43:50.180639  342029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:43:50.180837  342029 out.go:368] Setting JSON to false
	I1020 12:43:50.180869  342029 mustload.go:65] Loading cluster: ha-805676
	I1020 12:43:50.180970  342029 notify.go:220] Checking for updates...
	I1020 12:43:50.181286  342029 config.go:182] Loaded profile config "ha-805676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:43:50.181306  342029 status.go:174] checking status of ha-805676 ...
	I1020 12:43:50.182245  342029 cli_runner.go:164] Run: docker container inspect ha-805676 --format={{.State.Status}}
	I1020 12:43:50.202903  342029 status.go:371] ha-805676 host status = "Running" (err=<nil>)
	I1020 12:43:50.202930  342029 host.go:66] Checking if "ha-805676" exists ...
	I1020 12:43:50.203231  342029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-805676
	I1020 12:43:50.233590  342029 host.go:66] Checking if "ha-805676" exists ...
	I1020 12:43:50.234033  342029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:43:50.234087  342029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-805676
	I1020 12:43:50.260529  342029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/ha-805676/id_rsa Username:docker}
	I1020 12:43:50.366409  342029 ssh_runner.go:195] Run: systemctl --version
	I1020 12:43:50.372826  342029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:50.392282  342029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 12:43:50.468765  342029 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-20 12:43:50.457886096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 12:43:50.469425  342029 kubeconfig.go:125] found "ha-805676" server: "https://192.168.49.254:8443"
	I1020 12:43:50.469487  342029 api_server.go:166] Checking apiserver status ...
	I1020 12:43:50.469539  342029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:43:50.484286  342029 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	I1020 12:43:50.493965  342029 api_server.go:182] apiserver freezer: "11:freezer:/docker/f7d262697b255fdd0172944c22e15a86fb563a73e8d6dc606ee5e71323330eaa/crio/crio-bc321c5f5013d72c30f7c7fad62aef2588128fae70e0e3c5a9ef22f13e579852"
	I1020 12:43:50.494032  342029 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f7d262697b255fdd0172944c22e15a86fb563a73e8d6dc606ee5e71323330eaa/crio/crio-bc321c5f5013d72c30f7c7fad62aef2588128fae70e0e3c5a9ef22f13e579852/freezer.state
	I1020 12:43:50.502527  342029 api_server.go:204] freezer state: "THAWED"
	I1020 12:43:50.502554  342029 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1020 12:43:50.519089  342029 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1020 12:43:50.519126  342029 status.go:463] ha-805676 apiserver status = Running (err=<nil>)
	I1020 12:43:50.519173  342029 status.go:176] ha-805676 status: &{Name:ha-805676 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:43:50.519198  342029 status.go:174] checking status of ha-805676-m02 ...
	I1020 12:43:50.519524  342029 cli_runner.go:164] Run: docker container inspect ha-805676-m02 --format={{.State.Status}}
	I1020 12:43:50.538404  342029 status.go:371] ha-805676-m02 host status = "Stopped" (err=<nil>)
	I1020 12:43:50.538431  342029 status.go:384] host is not running, skipping remaining checks
	I1020 12:43:50.538438  342029 status.go:176] ha-805676-m02 status: &{Name:ha-805676-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:43:50.538458  342029 status.go:174] checking status of ha-805676-m03 ...
	I1020 12:43:50.538843  342029 cli_runner.go:164] Run: docker container inspect ha-805676-m03 --format={{.State.Status}}
	I1020 12:43:50.557491  342029 status.go:371] ha-805676-m03 host status = "Running" (err=<nil>)
	I1020 12:43:50.557515  342029 host.go:66] Checking if "ha-805676-m03" exists ...
	I1020 12:43:50.557824  342029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-805676-m03
	I1020 12:43:50.574569  342029 host.go:66] Checking if "ha-805676-m03" exists ...
	I1020 12:43:50.574882  342029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:43:50.574927  342029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-805676-m03
	I1020 12:43:50.595063  342029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/ha-805676-m03/id_rsa Username:docker}
	I1020 12:43:50.706334  342029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:50.719747  342029 kubeconfig.go:125] found "ha-805676" server: "https://192.168.49.254:8443"
	I1020 12:43:50.719776  342029 api_server.go:166] Checking apiserver status ...
	I1020 12:43:50.719831  342029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 12:43:50.730941  342029 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1171/cgroup
	I1020 12:43:50.739490  342029 api_server.go:182] apiserver freezer: "11:freezer:/docker/ebd8fdc89b238403547a03062826060b19f377754376db2c8fc4034bc73ab176/crio/crio-bddf57e50ea4ba3a94ca7382b2efc4a317e37981cc7e0852860be2116ba4495b"
	I1020 12:43:50.739559  342029 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ebd8fdc89b238403547a03062826060b19f377754376db2c8fc4034bc73ab176/crio/crio-bddf57e50ea4ba3a94ca7382b2efc4a317e37981cc7e0852860be2116ba4495b/freezer.state
	I1020 12:43:50.747648  342029 api_server.go:204] freezer state: "THAWED"
	I1020 12:43:50.747685  342029 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1020 12:43:50.755826  342029 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1020 12:43:50.755853  342029 status.go:463] ha-805676-m03 apiserver status = Running (err=<nil>)
	I1020 12:43:50.755863  342029 status.go:176] ha-805676-m03 status: &{Name:ha-805676-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:43:50.755900  342029 status.go:174] checking status of ha-805676-m04 ...
	I1020 12:43:50.756203  342029 cli_runner.go:164] Run: docker container inspect ha-805676-m04 --format={{.State.Status}}
	I1020 12:43:50.774307  342029 status.go:371] ha-805676-m04 host status = "Running" (err=<nil>)
	I1020 12:43:50.774336  342029 host.go:66] Checking if "ha-805676-m04" exists ...
	I1020 12:43:50.774694  342029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-805676-m04
	I1020 12:43:50.792353  342029 host.go:66] Checking if "ha-805676-m04" exists ...
	I1020 12:43:50.792794  342029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 12:43:50.792834  342029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-805676-m04
	I1020 12:43:50.810493  342029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/ha-805676-m04/id_rsa Username:docker}
	I1020 12:43:50.913801  342029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 12:43:50.927143  342029 status.go:176] ha-805676-m04 status: &{Name:ha-805676-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.97s)
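The stderr trace above also documents how status grades each control plane: pgrep for the kube-apiserver pid, read that pid's freezer cgroup to make sure the container is THAWED rather than paused, then GET /healthz through the shared VIP 192.168.49.254:8443 and expect HTTP 200. The first two steps by hand, roughly (node name and quoting are illustrative):

    PID=$(out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 \
      "sudo pgrep -xnf 'kube-apiserver.*minikube.*'")
    out/minikube-linux-arm64 -p ha-805676 ssh -n ha-805676 \
      "sudo egrep '^[0-9]+:freezer:' /proc/${PID}/cgroup"
    # status then reads freezer.state under that cgroup, expecting THAWED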

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (28.07s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 node start m02 --alsologtostderr -v 5: (26.722626425s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5: (1.214661275s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.286447563s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.29s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.47s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 stop --alsologtostderr -v 5: (27.374771662s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 start --wait true --alsologtostderr -v 5
E1020 12:44:56.299765  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:45:00.591754  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 start --wait true --alsologtostderr -v 5: (1m34.923376266s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.47s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.64s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 node delete m03 --alsologtostderr -v 5: (10.636969532s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.64s)
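The go-template at the end iterates every node's .status.conditions and prints only the Ready condition's status, one token per node, so a healthy three-node cluster after the delete yields three True lines. Condensed into a single check (context name assumed equal to the profile name):

    kubectl --context ha-805676 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}' \
      | sort | uniq -c    # expect a single line: 3 True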

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (36.47s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 stop --alsologtostderr -v 5: (36.349218781s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5: exit status 7 (119.253845ms)

-- stdout --
	ha-805676
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-805676-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-805676-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1020 12:47:12.436809  353841 out.go:360] Setting OutFile to fd 1 ...
	I1020 12:47:12.437020  353841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:47:12.437048  353841 out.go:374] Setting ErrFile to fd 2...
	I1020 12:47:12.437066  353841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 12:47:12.437360  353841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 12:47:12.437590  353841 out.go:368] Setting JSON to false
	I1020 12:47:12.437654  353841 mustload.go:65] Loading cluster: ha-805676
	I1020 12:47:12.437740  353841 notify.go:220] Checking for updates...
	I1020 12:47:12.438140  353841 config.go:182] Loaded profile config "ha-805676": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 12:47:12.438178  353841 status.go:174] checking status of ha-805676 ...
	I1020 12:47:12.439265  353841 cli_runner.go:164] Run: docker container inspect ha-805676 --format={{.State.Status}}
	I1020 12:47:12.460882  353841 status.go:371] ha-805676 host status = "Stopped" (err=<nil>)
	I1020 12:47:12.460912  353841 status.go:384] host is not running, skipping remaining checks
	I1020 12:47:12.460919  353841 status.go:176] ha-805676 status: &{Name:ha-805676 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:47:12.460948  353841 status.go:174] checking status of ha-805676-m02 ...
	I1020 12:47:12.461257  353841 cli_runner.go:164] Run: docker container inspect ha-805676-m02 --format={{.State.Status}}
	I1020 12:47:12.484452  353841 status.go:371] ha-805676-m02 host status = "Stopped" (err=<nil>)
	I1020 12:47:12.484474  353841 status.go:384] host is not running, skipping remaining checks
	I1020 12:47:12.484480  353841 status.go:176] ha-805676-m02 status: &{Name:ha-805676-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 12:47:12.484500  353841 status.go:174] checking status of ha-805676-m04 ...
	I1020 12:47:12.484891  353841 cli_runner.go:164] Run: docker container inspect ha-805676-m04 --format={{.State.Status}}
	I1020 12:47:12.503472  353841 status.go:371] ha-805676-m04 host status = "Stopped" (err=<nil>)
	I1020 12:47:12.503492  353841 status.go:384] host is not running, skipping remaining checks
	I1020 12:47:12.503498  353841 status.go:176] ha-805676-m04 status: &{Name:ha-805676-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.47s)
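Note that status against a fully stopped cluster exits 7 rather than 0, which is why the harness records a Non-zero exit here and still passes: minikube encodes host/kubelet/apiserver state in the exit code, and 7 corresponds to the all-Stopped output above. Sketch:

    out/minikube-linux-arm64 -p ha-805676 status; echo "exit=$?"    # 7 after stop, per this run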

TestMultiControlPlane/serial/RestartCluster (166.31s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1020 12:47:16.729001  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:47:44.440509  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:49:56.300189  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m45.345996822s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (166.31s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

TestMultiControlPlane/serial/AddSecondaryNode (85.09s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 node add --control-plane --alsologtostderr -v 5: (1m24.052565822s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-805676 status --alsologtostderr -v 5: (1.037217774s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (85.09s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.090594878s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestJSONOutput/start/Command (80.32s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-765746 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1020 12:52:16.728678  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-765746 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.318554973s)
--- PASS: TestJSONOutput/start/Command (80.32s)
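With --output=json every progress line is emitted as a CloudEvent, and the Distinct/IncreasingCurrentSteps subtests that follow assert over the data.currentstep sequence of the step events; the Audit subtests appear to check that the invocation (with --user=testUser) landed in minikube's audit log. A jq sketch over the step stream, with the event type string taken from minikube's cloudevents schema as best understood:

    out/minikube-linux-arm64 start -p json-output-765746 --output=json --user=testUser \
      --memory=3072 --wait=true --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | [.data.currentstep, .data.name] | @tsv'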

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-765746 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-765746 --output=json --user=testUser: (5.878030699s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-448512 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-448512 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.120681ms)

-- stdout --
	{"specversion":"1.0","id":"d58880f1-a0bc-4a4a-8518-89794e23c978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-448512] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"322b1398-00cf-4944-b1ff-c0879a7d3a3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21773"}}
	{"specversion":"1.0","id":"5a14274e-e08f-4182-96e0-f2c4c204ac67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"87b88966-3939-4125-8ace-c29c9ef91ef5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig"}}
	{"specversion":"1.0","id":"a0aca6d0-d5bc-4d24-ae67-b104842b3fbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube"}}
	{"specversion":"1.0","id":"78269a98-2b50-4522-9b2c-04cc86ea1caf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c46992f6-445d-4ffe-8c89-8967fbf046d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4e3419e3-1d3a-4f93-80e4-2cbd3f718a9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-448512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-448512
--- PASS: TestErrorJSONOutput (0.26s)
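The stdout above is a stream of CloudEvents-style JSON lines: each event carries a "type" (io.k8s.sigs.minikube.step, .info, or .error) and a string-valued "data" payload. A minimal Go sketch (not part of the test suite; the field names are taken from the stdout above) that decodes such a stream and surfaces step and error events:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent models only the fields visible in the JSON lines above.
	type minikubeEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | this program
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON lines in the stream
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}

Fed the stdout above, this would print the single setup step followed by the DRV_UNSUPPORTED_OS error with exit code 56.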

TestKicCustomNetwork/create_custom_network (41.42s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-477974 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-477974 --network=: (39.132917139s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-477974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-477974
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-477974: (2.262402368s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.42s)

TestKicCustomNetwork/use_default_bridge_network (39.47s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-657421 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-657421 --network=bridge: (37.402008555s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-657421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-657421
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-657421: (2.055025518s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.47s)

TestKicExistingNetwork (35.75s)
=== RUN   TestKicExistingNetwork
I1020 12:54:31.117251  298259 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1020 12:54:31.134160  298259 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1020 12:54:31.134828  298259 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1020 12:54:31.134848  298259 cli_runner.go:164] Run: docker network inspect existing-network
W1020 12:54:31.151738  298259 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1020 12:54:31.151774  298259 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1020 12:54:31.151797  298259 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1020 12:54:31.151907  298259 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1020 12:54:31.170179  298259 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-31214b196961 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3e:99:57:10:1b:40} reservation:<nil>}
I1020 12:54:31.176133  298259 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1020 12:54:31.176551  298259 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c79650}
I1020 12:54:31.177116  298259 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1020 12:54:31.177202  298259 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1020 12:54:31.242349  298259 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-371893 --network=existing-network
E1020 12:54:56.304497  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-371893 --network=existing-network: (33.472666186s)
helpers_test.go:175: Cleaning up "existing-network-371893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-371893
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-371893: (2.107088708s)
I1020 12:55:06.838434  298259 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.75s)
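The log above also shows how a free subnet is chosen for the new network: 192.168.49.0/24 is skipped as taken, 192.168.58.0/24 as reserved, and 192.168.67.0/24 wins. An illustrative Go sketch of that probing (a hypothetical helper, not minikube's actual network.go; the step of 9 in the third octet is inferred from the 49/58/67 progression in this run):

	package main

	import "fmt"

	// firstFreeSubnet walks candidate /24s in the order seen above and
	// returns the first one that is neither taken nor reserved.
	func firstFreeSubnet(unavailable map[string]bool) (string, error) {
		for octet := 49; octet <= 247; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !unavailable[subnet] {
				return subnet, nil
			}
		}
		return "", fmt.Errorf("no free /24 found in 192.168.0.0/16")
	}

	func main() {
		unavailable := map[string]bool{
			"192.168.49.0/24": true, // taken by the existing minikube bridge
			"192.168.58.0/24": true, // reserved
		}
		s, err := firstFreeSubnet(unavailable)
		fmt.Println(s, err) // 192.168.67.0/24 <nil>
	}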

TestKicCustomSubnet (34.51s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-141352 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-141352 --subnet=192.168.60.0/24: (32.326316804s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-141352 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-141352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-141352
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-141352: (2.148863511s)
--- PASS: TestKicCustomSubnet (34.51s)

TestKicStaticIP (37.69s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-061917 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-061917 --static-ip=192.168.200.200: (35.333124706s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-061917 ip
helpers_test.go:175: Cleaning up "static-ip-061917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-061917
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-061917: (2.210925701s)
--- PASS: TestKicStaticIP (37.69s)

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (72.63s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-985594 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-985594 --driver=docker  --container-runtime=crio: (35.57375723s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-988023 --driver=docker  --container-runtime=crio
E1020 12:57:16.728871  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-988023 --driver=docker  --container-runtime=crio: (31.356160381s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-985594
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-988023
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-988023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-988023
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-988023: (2.13108282s)
helpers_test.go:175: Cleaning up "first-985594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-985594
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-985594: (2.106335273s)
--- PASS: TestMinikubeProfile (72.63s)

TestMountStart/serial/StartWithMountFirst (9.02s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-967542 --memory=3072 --mount-string /tmp/TestMountStartserial634408071/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-967542 --memory=3072 --mount-string /tmp/TestMountStartserial634408071/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.019514468s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.02s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-967542 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.29s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-969683 --memory=3072 --mount-string /tmp/TestMountStartserial634408071/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-969683 --memory=3072 --mount-string /tmp/TestMountStartserial634408071/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.291820418s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.29s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-969683 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.73s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-967542 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-967542 --alsologtostderr -v=5: (1.732415711s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-969683 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-969683
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-969683: (1.285833146s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.84s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-969683
E1020 12:57:59.372738  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-969683: (6.840234536s)
--- PASS: TestMountStart/serial/RestartStopped (7.84s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-969683 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (139.23s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-347103 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1020 12:58:39.802812  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 12:59:56.299683  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-347103 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.698693544s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.23s)

TestMultiNode/serial/DeployApp2Nodes (5.32s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-347103 -- rollout status deployment/busybox: (3.52197928s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-7q852 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-rqq5s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-7q852 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-rqq5s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-7q852 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-rqq5s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.32s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-7q852 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-7q852 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-rqq5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-347103 -- exec busybox-7b57f96db7-rqq5s -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (59.47s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-347103 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-347103 -v=5 --alsologtostderr: (58.725357105s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.47s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-347103 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.66s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp testdata/cp-test.txt multinode-347103:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1216353892/001/cp-test_multinode-347103.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103:/home/docker/cp-test.txt multinode-347103-m02:/home/docker/cp-test_multinode-347103_multinode-347103-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m02 "sudo cat /home/docker/cp-test_multinode-347103_multinode-347103-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103:/home/docker/cp-test.txt multinode-347103-m03:/home/docker/cp-test_multinode-347103_multinode-347103-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m03 "sudo cat /home/docker/cp-test_multinode-347103_multinode-347103-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp testdata/cp-test.txt multinode-347103-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1216353892/001/cp-test_multinode-347103-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103-m02:/home/docker/cp-test.txt multinode-347103:/home/docker/cp-test_multinode-347103-m02_multinode-347103.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103 "sudo cat /home/docker/cp-test_multinode-347103-m02_multinode-347103.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103-m02:/home/docker/cp-test.txt multinode-347103-m03:/home/docker/cp-test_multinode-347103-m02_multinode-347103-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m03 "sudo cat /home/docker/cp-test_multinode-347103-m02_multinode-347103-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp testdata/cp-test.txt multinode-347103-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1216353892/001/cp-test_multinode-347103-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103-m03:/home/docker/cp-test.txt multinode-347103:/home/docker/cp-test_multinode-347103-m03_multinode-347103.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103 "sudo cat /home/docker/cp-test_multinode-347103-m03_multinode-347103.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 cp multinode-347103-m03:/home/docker/cp-test.txt multinode-347103-m02:/home/docker/cp-test_multinode-347103-m03_multinode-347103-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 ssh -n multinode-347103-m02 "sudo cat /home/docker/cp-test_multinode-347103-m03_multinode-347103-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.66s)
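Every round trip above follows the same pattern: cp a file onto a node, then cat it back over ssh on the target node. A condensed sketch of one such round trip using os/exec (the binary path, profile, and file names are the ones from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		bin := "out/minikube-linux-arm64"
		profile := "multinode-347103"

		// Copy a local file into the node, as helpers_test.go:573 does above.
		cp := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			fmt.Printf("cp failed: %v\n%s", err, out)
			return
		}

		// Read the file back over ssh, as helpers_test.go:551 does above.
		cat := exec.Command(bin, "-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
		out, err := cat.CombinedOutput()
		fmt.Printf("err=%v contents=%q\n", err, out)
	}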

TestMultiNode/serial/StopNode (2.49s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-347103 node stop m03: (1.306911087s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-347103 status: exit status 7 (635.89778ms)

-- stdout --
	multinode-347103
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-347103-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-347103-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-347103 status --alsologtostderr: exit status 7 (542.378898ms)

-- stdout --
	multinode-347103
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-347103-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-347103-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1020 13:01:42.376954  404338 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:01:42.377114  404338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:01:42.377144  404338 out.go:374] Setting ErrFile to fd 2...
	I1020 13:01:42.377164  404338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:01:42.377514  404338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:01:42.377752  404338 out.go:368] Setting JSON to false
	I1020 13:01:42.377816  404338 mustload.go:65] Loading cluster: multinode-347103
	I1020 13:01:42.377875  404338 notify.go:220] Checking for updates...
	I1020 13:01:42.378256  404338 config.go:182] Loaded profile config "multinode-347103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:01:42.378287  404338 status.go:174] checking status of multinode-347103 ...
	I1020 13:01:42.379228  404338 cli_runner.go:164] Run: docker container inspect multinode-347103 --format={{.State.Status}}
	I1020 13:01:42.405460  404338 status.go:371] multinode-347103 host status = "Running" (err=<nil>)
	I1020 13:01:42.405482  404338 host.go:66] Checking if "multinode-347103" exists ...
	I1020 13:01:42.405787  404338 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-347103
	I1020 13:01:42.426294  404338 host.go:66] Checking if "multinode-347103" exists ...
	I1020 13:01:42.426590  404338 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:01:42.426635  404338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-347103
	I1020 13:01:42.446156  404338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/multinode-347103/id_rsa Username:docker}
	I1020 13:01:42.550184  404338 ssh_runner.go:195] Run: systemctl --version
	I1020 13:01:42.557074  404338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:01:42.570287  404338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:01:42.635507  404338 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-20 13:01:42.625840616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:01:42.636050  404338 kubeconfig.go:125] found "multinode-347103" server: "https://192.168.58.2:8443"
	I1020 13:01:42.636102  404338 api_server.go:166] Checking apiserver status ...
	I1020 13:01:42.636148  404338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1020 13:01:42.648832  404338 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1231/cgroup
	I1020 13:01:42.657781  404338 api_server.go:182] apiserver freezer: "11:freezer:/docker/7b5953738c78c7cd5a69d558e86653574d72fa1b6c940ea466ff8435a406d6c5/crio/crio-c9885560d03b6e34aba2815b0938645faa35db34bb8a1c9f5e56198751e6f2a8"
	I1020 13:01:42.657853  404338 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7b5953738c78c7cd5a69d558e86653574d72fa1b6c940ea466ff8435a406d6c5/crio/crio-c9885560d03b6e34aba2815b0938645faa35db34bb8a1c9f5e56198751e6f2a8/freezer.state
	I1020 13:01:42.665310  404338 api_server.go:204] freezer state: "THAWED"
	I1020 13:01:42.665338  404338 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1020 13:01:42.674359  404338 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1020 13:01:42.674420  404338 status.go:463] multinode-347103 apiserver status = Running (err=<nil>)
	I1020 13:01:42.674439  404338 status.go:176] multinode-347103 status: &{Name:multinode-347103 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 13:01:42.674475  404338 status.go:174] checking status of multinode-347103-m02 ...
	I1020 13:01:42.674802  404338 cli_runner.go:164] Run: docker container inspect multinode-347103-m02 --format={{.State.Status}}
	I1020 13:01:42.691616  404338 status.go:371] multinode-347103-m02 host status = "Running" (err=<nil>)
	I1020 13:01:42.691641  404338 host.go:66] Checking if "multinode-347103-m02" exists ...
	I1020 13:01:42.691957  404338 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-347103-m02
	I1020 13:01:42.709741  404338 host.go:66] Checking if "multinode-347103-m02" exists ...
	I1020 13:01:42.710064  404338 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1020 13:01:42.710111  404338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-347103-m02
	I1020 13:01:42.726908  404338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21773-296391/.minikube/machines/multinode-347103-m02/id_rsa Username:docker}
	I1020 13:01:42.833748  404338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1020 13:01:42.846506  404338 status.go:176] multinode-347103-m02 status: &{Name:multinode-347103-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1020 13:01:42.846541  404338 status.go:174] checking status of multinode-347103-m03 ...
	I1020 13:01:42.846876  404338 cli_runner.go:164] Run: docker container inspect multinode-347103-m03 --format={{.State.Status}}
	I1020 13:01:42.863822  404338 status.go:371] multinode-347103-m03 host status = "Stopped" (err=<nil>)
	I1020 13:01:42.863848  404338 status.go:384] host is not running, skipping remaining checks
	I1020 13:01:42.863855  404338 status.go:176] multinode-347103-m03 status: &{Name:multinode-347103-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.49s)
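In the stderr above, status decides the control plane is healthy by locating the apiserver's cgroup, confirming the freezer state is THAWED, and finally probing /healthz for a 200 "ok". A minimal standalone sketch of that last probe (the endpoint is the one from this run; InsecureSkipVerify is a stand-in for the real cluster-CA handling):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Sketch only: a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // the run above saw: 200 ok
	}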

TestMultiNode/serial/StartAfterStop (8.86s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-347103 node start m03 -v=5 --alsologtostderr: (8.03053106s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.86s)

TestMultiNode/serial/RestartKeepsNodes (75.76s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-347103
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-347103
E1020 13:02:16.728559  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-347103: (25.080033843s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-347103 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-347103 --wait=true -v=5 --alsologtostderr: (50.55373043s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-347103
--- PASS: TestMultiNode/serial/RestartKeepsNodes (75.76s)

TestMultiNode/serial/DeleteNode (5.53s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-347103 node delete m03: (4.844401984s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.53s)

TestMultiNode/serial/StopMultiNode (24.02s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-347103 stop: (23.81858087s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-347103 status: exit status 7 (95.472193ms)

-- stdout --
	multinode-347103
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-347103-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-347103 status --alsologtostderr: exit status 7 (104.198647ms)

-- stdout --
	multinode-347103
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-347103-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1020 13:03:36.989593  412038 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:03:36.989715  412038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:03:36.989725  412038 out.go:374] Setting ErrFile to fd 2...
	I1020 13:03:36.989731  412038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:03:36.990114  412038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:03:36.990583  412038 out.go:368] Setting JSON to false
	I1020 13:03:36.990776  412038 mustload.go:65] Loading cluster: multinode-347103
	I1020 13:03:36.991176  412038 notify.go:220] Checking for updates...
	I1020 13:03:36.991190  412038 config.go:182] Loaded profile config "multinode-347103": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:03:36.991327  412038 status.go:174] checking status of multinode-347103 ...
	I1020 13:03:36.991899  412038 cli_runner.go:164] Run: docker container inspect multinode-347103 --format={{.State.Status}}
	I1020 13:03:37.011235  412038 status.go:371] multinode-347103 host status = "Stopped" (err=<nil>)
	I1020 13:03:37.011261  412038 status.go:384] host is not running, skipping remaining checks
	I1020 13:03:37.011269  412038 status.go:176] multinode-347103 status: &{Name:multinode-347103 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1020 13:03:37.011305  412038 status.go:174] checking status of multinode-347103-m02 ...
	I1020 13:03:37.011691  412038 cli_runner.go:164] Run: docker container inspect multinode-347103-m02 --format={{.State.Status}}
	I1020 13:03:37.043644  412038 status.go:371] multinode-347103-m02 host status = "Stopped" (err=<nil>)
	I1020 13:03:37.043671  412038 status.go:384] host is not running, skipping remaining checks
	I1020 13:03:37.043677  412038 status.go:176] multinode-347103-m02 status: &{Name:multinode-347103-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (52.01s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-347103 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-347103 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.321200753s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-347103 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.01s)

TestMultiNode/serial/ValidateNameConflict (33.22s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-347103
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-347103-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-347103-m02 --driver=docker  --container-runtime=crio: exit status 14 (100.947733ms)

-- stdout --
	* [multinode-347103-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-347103-m02' is duplicated with machine name 'multinode-347103-m02' in profile 'multinode-347103'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-347103-m03 --driver=docker  --container-runtime=crio
E1020 13:04:56.307039  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-347103-m03 --driver=docker  --container-runtime=crio: (30.339767513s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-347103
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-347103: exit status 80 (340.480904ms)

-- stdout --
	* Adding node m03 to cluster multinode-347103 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-347103-m03 already exists in multinode-347103-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-347103-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-347103-m03: (2.381780916s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.22s)

TestPreload (126.5s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-317687 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-317687 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m0.765506856s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-317687 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-317687 image pull gcr.io/k8s-minikube/busybox: (2.21578145s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-317687
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-317687: (5.984878427s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-317687 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-317687 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.780460118s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-317687 image list
helpers_test.go:175: Cleaning up "test-preload-317687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-317687
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-317687: (2.510731753s)
--- PASS: TestPreload (126.50s)

TestScheduledStopUnix (109.33s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-608880 --memory=3072 --driver=docker  --container-runtime=crio
E1020 13:07:16.728525  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-608880 --memory=3072 --driver=docker  --container-runtime=crio: (33.198734655s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-608880 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-608880 -n scheduled-stop-608880
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-608880 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1020 13:07:46.841856  298259 retry.go:31] will retry after 95.524µs: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.843022  298259 retry.go:31] will retry after 210.064µs: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.844234  298259 retry.go:31] will retry after 250.545µs: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.846327  298259 retry.go:31] will retry after 458.314µs: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.847401  298259 retry.go:31] will retry after 287.704µs: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.848517  298259 retry.go:31] will retry after 728.306µs: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.849591  298259 retry.go:31] will retry after 1.357897ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.851781  298259 retry.go:31] will retry after 1.029889ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.852902  298259 retry.go:31] will retry after 2.876447ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.856084  298259 retry.go:31] will retry after 4.283685ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.861352  298259 retry.go:31] will retry after 6.042023ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.867531  298259 retry.go:31] will retry after 8.295958ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.876759  298259 retry.go:31] will retry after 17.340732ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.895950  298259 retry.go:31] will retry after 21.637377ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.918144  298259 retry.go:31] will retry after 17.697167ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
I1020 13:07:46.936459  298259 retry.go:31] will retry after 55.212576ms: open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/scheduled-stop-608880/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-608880 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-608880 -n scheduled-stop-608880
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-608880
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-608880 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-608880
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-608880: exit status 7 (68.235978ms)

-- stdout --
	scheduled-stop-608880
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-608880 -n scheduled-stop-608880
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-608880 -n scheduled-stop-608880: exit status 7 (72.35076ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-608880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-608880
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-608880: (4.488684707s)
--- PASS: TestScheduledStopUnix (109.33s)

TestInsufficientStorage (13.66s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-255510 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-255510 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.013010156s)

-- stdout --
	{"specversion":"1.0","id":"e6fa2eec-5b70-4e7a-8c8a-485f17ed9de2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-255510] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d3f5933-c502-4937-beb4-743c1a455ee0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21773"}}
	{"specversion":"1.0","id":"495133c3-6e94-4538-8852-55eb2ae4ae88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6b8d4397-b58a-485e-ad81-d57ead1e6a8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig"}}
	{"specversion":"1.0","id":"7b207983-7030-484d-a2d5-0cc1a4a6a447","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube"}}
	{"specversion":"1.0","id":"ff2cd340-9520-4caa-bea2-f57c6d1b0b4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7b7f0c6d-d8e5-4310-88fc-250bd7a138b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"86ee996c-5436-447a-aee0-13519d5a9216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"015de4fd-bdd7-43bc-9201-e8609ecad09d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2d665dd0-b3de-4722-86c6-9270acbb375f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4010dc01-3a93-491e-be57-32c503e62f1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f2285d95-1572-4327-822b-d1c63273a4e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-255510\" primary control-plane node in \"insufficient-storage-255510\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cab58c40-3a8f-4829-8a15-ef9e7bf3253c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a2cc3293-61d4-4296-90c3-90045c7e383c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6d9b1e0-5f96-4281-99f8-955d05fbab32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-255510 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-255510 --output=json --layout=cluster: exit status 7 (316.359048ms)

-- stdout --
	{"Name":"insufficient-storage-255510","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-255510","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1020 13:09:13.750885  428139 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-255510" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-255510 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-255510 --output=json --layout=cluster: exit status 7 (318.667848ms)

-- stdout --
	{"Name":"insufficient-storage-255510","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-255510","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1020 13:09:14.069394  428203 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-255510" does not appear in /home/jenkins/minikube-integration/21773-296391/kubeconfig
	E1020 13:09:14.079975  428203 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/insufficient-storage-255510/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-255510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-255510
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-255510: (2.010173451s)
--- PASS: TestInsufficientStorage (13.66s)

TestRunningBinaryUpgrade (53.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2912578411 start -p running-upgrade-694683 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2912578411 start -p running-upgrade-694683 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.595130522s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-694683 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-694683 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.610688272s)
helpers_test.go:175: Cleaning up "running-upgrade-694683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-694683
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-694683: (1.991907977s)
--- PASS: TestRunningBinaryUpgrade (53.91s)

TestKubernetesUpgrade (365.99s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.87381922s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-314577
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-314577: (1.468087079s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-314577 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-314577 status --format={{.Host}}: exit status 7 (135.335587ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.223142896s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-314577 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (119.005279ms)

-- stdout --
	* [kubernetes-upgrade-314577] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-314577
	    minikube start -p kubernetes-upgrade-314577 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3145772 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-314577 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-314577 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.874717646s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-314577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-314577
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-314577: (2.18346043s)
--- PASS: TestKubernetesUpgrade (365.99s)

TestMissingContainerUpgrade (116.46s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1631940536 start -p missing-upgrade-507750 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1631940536 start -p missing-upgrade-507750 --memory=3072 --driver=docker  --container-runtime=crio: (1m6.822710081s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-507750
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-507750
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-507750 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1020 13:12:16.729193  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-507750 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.184385382s)
helpers_test.go:175: Cleaning up "missing-upgrade-507750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-507750
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-507750: (1.993499541s)
--- PASS: TestMissingContainerUpgrade (116.46s)

TestPause/serial/Start (62.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-255950 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-255950 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m2.053250419s)
--- PASS: TestPause/serial/Start (62.05s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-820821 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-820821 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (136.18288ms)

-- stdout --
	* [NoKubernetes-820821] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

TestNoKubernetes/serial/StartWithK8s (43.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-820821 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1020 13:09:56.299898  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-820821 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.141202016s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-820821 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.69s)

TestNoKubernetes/serial/StartWithStopK8s (7.68s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-820821 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-820821 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.339198933s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-820821 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-820821 status -o json: exit status 2 (321.27666ms)

-- stdout --
	{"Name":"NoKubernetes-820821","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-820821
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-820821: (2.018712338s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.68s)

TestNoKubernetes/serial/Start (9.09s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-820821 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-820821 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.09240209s)
--- PASS: TestNoKubernetes/serial/Start (9.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-820821 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-820821 "sudo systemctl is-active --quiet service kubelet": exit status 1 (303.161913ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (1.2s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

TestPause/serial/SecondStartNoReconfiguration (21.97s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-255950 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-255950 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.937551212s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (21.97s)

TestNoKubernetes/serial/Stop (1.41s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-820821
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-820821: (1.414722863s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

TestNoKubernetes/serial/StartNoArgs (7.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-820821 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-820821 --driver=docker  --container-runtime=crio: (7.745114914s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-820821 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-820821 "sudo systemctl is-active --quiet service kubelet": exit status 1 (354.324877ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestStoppedBinaryUpgrade/Setup (0.68s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

TestStoppedBinaryUpgrade/Upgrade (54.36s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.559867998 start -p stopped-upgrade-997295 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.559867998 start -p stopped-upgrade-997295 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.290548324s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.559867998 -p stopped-upgrade-997295 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.559867998 -p stopped-upgrade-997295 stop: (1.23004655s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-997295 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-997295 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.839806948s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (54.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-997295
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-997295: (1.176866696s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

TestNetworkPlugins/group/false (3.67s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-308474 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-308474 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (205.238383ms)

-- stdout --
	* [false-308474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21773
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1020 13:15:03.054651  463093 out.go:360] Setting OutFile to fd 1 ...
	I1020 13:15:03.054789  463093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:15:03.054799  463093 out.go:374] Setting ErrFile to fd 2...
	I1020 13:15:03.054803  463093 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1020 13:15:03.055091  463093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21773-296391/.minikube/bin
	I1020 13:15:03.055490  463093 out.go:368] Setting JSON to false
	I1020 13:15:03.056430  463093 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10653,"bootTime":1760955450,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1020 13:15:03.056506  463093 start.go:141] virtualization:  
	I1020 13:15:03.060112  463093 out.go:179] * [false-308474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1020 13:15:03.063945  463093 out.go:179]   - MINIKUBE_LOCATION=21773
	I1020 13:15:03.064071  463093 notify.go:220] Checking for updates...
	I1020 13:15:03.070171  463093 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1020 13:15:03.073260  463093 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21773-296391/kubeconfig
	I1020 13:15:03.076261  463093 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21773-296391/.minikube
	I1020 13:15:03.079171  463093 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1020 13:15:03.082163  463093 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1020 13:15:03.085653  463093 config.go:182] Loaded profile config "kubernetes-upgrade-314577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1020 13:15:03.085808  463093 driver.go:421] Setting default libvirt URI to qemu:///system
	I1020 13:15:03.120504  463093 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1020 13:15:03.120634  463093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1020 13:15:03.180481  463093 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-20 13:15:03.170336632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1020 13:15:03.180592  463093 docker.go:318] overlay module found
	I1020 13:15:03.183759  463093 out.go:179] * Using the docker driver based on user configuration
	I1020 13:15:03.186542  463093 start.go:305] selected driver: docker
	I1020 13:15:03.186562  463093 start.go:925] validating driver "docker" against <nil>
	I1020 13:15:03.186575  463093 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1020 13:15:03.190342  463093 out.go:203] 
	W1020 13:15:03.193174  463093 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1020 13:15:03.196007  463093 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-308474 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-308474

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-308474

>>> host: /etc/nsswitch.conf:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: /etc/hosts:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: /etc/resolv.conf:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-308474

>>> host: crictl pods:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: crictl containers:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> k8s: describe netcat deployment:
error: context "false-308474" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-308474" does not exist

>>> k8s: netcat logs:
error: context "false-308474" does not exist

>>> k8s: describe coredns deployment:
error: context "false-308474" does not exist

>>> k8s: describe coredns pods:
error: context "false-308474" does not exist

>>> k8s: coredns logs:
error: context "false-308474" does not exist

>>> k8s: describe api server pod(s):
error: context "false-308474" does not exist

>>> k8s: api server logs:
error: context "false-308474" does not exist

>>> host: /etc/cni:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: ip a s:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: ip r s:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: iptables-save:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-308474" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 13:11:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-314577
contexts:
- context:
    cluster: kubernetes-upgrade-314577
    user: kubernetes-upgrade-314577
  name: kubernetes-upgrade-314577
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-314577
  user:
    client-certificate: /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/kubernetes-upgrade-314577/client.crt
    client-key: /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/kubernetes-upgrade-314577/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-308474

>>> host: docker daemon status:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: docker daemon config:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: /etc/docker/daemon.json:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: docker system info:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: cri-docker daemon status:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: cri-docker daemon config:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: cri-dockerd version:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: containerd daemon status:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: containerd daemon config:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: /etc/containerd/config.toml:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: containerd config dump:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: crio daemon status:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: crio daemon config:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: /etc/crio:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

>>> host: crio config:
* Profile "false-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-308474"

----------------------- debugLogs end: false-308474 [took: 3.311921864s] --------------------------------
helpers_test.go:175: Cleaning up "false-308474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-308474
--- PASS: TestNetworkPlugins/group/false (3.67s)
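Note: every probe in the debugLogs dump above answers with "Profile ... not found" or "context ... does not exist" because no false-308474 cluster exists at collection time; the collector still runs its full probe list regardless. As a sketch, the same probes against a live profile would be (the profile name is a placeholder):

minikube profile list
minikube -p <profile> ssh -- sudo crictl ps -a
kubectl --context <profile> describe deployment netcat
kubectl --context <profile> logs -l app=netcat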

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (85.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m25.958191053s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (85.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-995203 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [78b08b5e-c021-42c9-bc66-5ad8d839afc5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [78b08b5e-c021-42c9-bc66-5ad8d839afc5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003934204s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-995203 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.37s)
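Note: DeployApp creates testdata/busybox.yaml and then reads the container's open-file limit via "ulimit -n". The testdata manifest itself is not reproduced in this log; a minimal stand-in with the pod name and label the helpers report (image taken from the image list later in this report, the sleep command is an assumption) would be:

kubectl --context old-k8s-version-995203 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]   # assumed; keeps the pod Running for the exec check
EOF
kubectl --context old-k8s-version-995203 exec busybox -- /bin/sh -c "ulimit -n"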

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-995203 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-995203 --alsologtostderr -v=3: (12.005673061s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-995203 -n old-k8s-version-995203
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-995203 -n old-k8s-version-995203: exit status 7 (76.886952ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-995203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
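Note: for a stopped cluster, "minikube status" exits non-zero by design; the run above returns exit status 7 with Host=Stopped, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon. A sketch of the same check:

st=$(out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-995203); rc=$?
echo "host=${st} rc=${rc}"   # rc=7 with host=Stopped, matching the run above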

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-995203 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.71076956s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-995203 -n old-k8s-version-995203
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-72xxb" [cc3251fa-505c-47ad-94ec-14b28587285f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003262615s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-72xxb" [cc3251fa-505c-47ad-94ec-14b28587285f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004281319s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-995203 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-995203 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
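Note: VerifyKubernetesImages lists the node's images as JSON and flags anything outside the set expected for the Kubernetes version; the kindnet and busybox images above are the expected extras from earlier steps. Assuming the JSON is an array of objects carrying a repoTags field (the exact schema is not shown in this log), a manual inspection sketch:

out/minikube-linux-arm64 -p old-k8s-version-995203 image list --format=json | jq -r '.[].repoTags[]?' | sort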

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m18.964730986s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (51.69s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.693561979s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.69s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-794175 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [630ece87-4be2-448f-b9d0-4e832072a0c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [630ece87-4be2-448f-b9d0-4e832072a0c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.012135141s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-794175 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-794175 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-794175 --alsologtostderr -v=3: (12.26915987s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175: exit status 7 (83.857044ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-794175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1020 13:22:16.729399  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-794175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.615477037s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-794175 -n default-k8s-diff-port-794175
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.12s)
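Note: the interleaved "E1020 ... cert_rotation.go:172" lines here and below appear to come from client-go's certificate reloader inside the test binary: it keeps watching client certs of profiles that earlier tests deleted (functional-749689 here, old-k8s-version-995203 and default-k8s-diff-port-794175 further down), so the "no such file or directory" errors are log noise, not failures of the tests they appear in. To read the report without them (a filtering sketch, assuming it is saved as report.txt):

grep -v 'cert_rotation.go' report.txt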

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-979197 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [50db164b-1b33-4592-8bf8-53911486ce65] Pending
helpers_test.go:352: "busybox" [50db164b-1b33-4592-8bf8-53911486ce65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [50db164b-1b33-4592-8bf8-53911486ce65] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003960143s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-979197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-979197 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-979197 --alsologtostderr -v=3: (12.623128243s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-979197 -n embed-certs-979197
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-979197 -n embed-certs-979197: exit status 7 (67.431496ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-979197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (58.19s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-979197 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.733505614s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-979197 -n embed-certs-979197
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (58.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-spstf" [511cde88-9329-460d-9d71-a37a0512555c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002996559s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-spstf" [511cde88-9329-460d-9d71-a37a0512555c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003027699s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-794175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-794175 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (114.6s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m54.603527458s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9zg9f" [5e999fa2-9b91-494f-afd7-ce9f673fe72d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024064439s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9zg9f" [5e999fa2-9b91-494f-afd7-ce9f673fe72d] Running
E1020 13:23:59.784738  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:23:59.791937  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:23:59.803367  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:23:59.825709  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:23:59.867287  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:23:59.948941  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:24:00.111582  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:24:00.448014  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:24:01.089293  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004579743s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-979197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-979197 image list --format=json
E1020 13:24:02.370773  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.44s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1020 13:24:20.296403  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:24:40.778607  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.436867912s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-018730 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-018730 --alsologtostderr -v=3: (1.494233573s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-018730 -n newest-cni-018730
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-018730 -n newest-cni-018730: exit status 7 (213.267672ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-018730 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.21s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-018730 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (16.715744215s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-018730 -n newest-cni-018730
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.21s)
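Note: both newest-cni starts pass --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, so kubeadm allocates pod CIDRs from that range instead of its default. A sketch to confirm the node actually picked up the range (assumes the cluster is still running):

kubectl --context newest-cni-018730 get nodes -o jsonpath='{.items[0].spec.podCIDR}'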

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-018730 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (83.96s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m23.96474707s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-744804 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [751404bb-a4a7-4344-b48b-077e31d184a4] Pending
helpers_test.go:352: "busybox" [751404bb-a4a7-4344-b48b-077e31d184a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [751404bb-a4a7-4344-b48b-077e31d184a4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.00359399s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-744804 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-744804 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-744804 --alsologtostderr -v=3: (12.26330145s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-744804 -n no-preload-744804
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-744804 -n no-preload-744804: exit status 7 (131.838851ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-744804 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (50.72s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1020 13:26:43.661992  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-744804 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.296233551s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-744804 -n no-preload-744804
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4sq6t" [7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003211384s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-308474 "pgrep -a kubelet"
I1020 13:26:50.803197  298259 config.go:182] Loaded profile config "auto-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-308474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rfg8b" [7d15d58e-a025-4bd1-94cc-45c4752047b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1020 13:26:51.641495  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:26:51.647829  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:26:51.659211  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:26:51.680676  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:26:51.722039  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:26:51.803837  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:26:51.965607  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:26:52.287620  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rfg8b" [7d15d58e-a025-4bd1-94cc-45c4752047b1] Running
E1020 13:26:56.772830  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003913584s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
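Note: NetCatPod force-replaces testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat is Running, as the helpers_test lines show. The same readiness gate can be written directly with kubectl wait (a sketch; the 15m timeout mirrors the test's wait window):

kubectl --context auto-308474 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-308474 wait --for=condition=Ready pod -l app=netcat --timeout=15m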

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4sq6t" [7b1e78ff-f6ea-4f7c-82e1-7bc0755ae3c4] Running
E1020 13:26:52.929094  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:26:54.211202  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003347912s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-744804 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
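For reference, this addon check can be repeated by hand against the same profile. A minimal sketch using the same commands the test itself issues (profile, namespace, and label taken from the log above):

    kubectl --context no-preload-744804 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-744804 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper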
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-744804 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
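The image check above lists the cluster's images and reports any outside the expected minikube set. A hedged one-liner to inspect the same list manually — this assumes the JSON output carries a repoTags field per image and that jq is available; it is not the test's own filter:

    out/minikube-linux-arm64 -p no-preload-744804 image list --format=json \
      | jq -r '.[].repoTags[]?' | sort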
TestNetworkPlugins/group/auto/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-308474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.32s)

TestNetworkPlugins/group/auto/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1020 13:27:01.894404  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
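The HairPin probe above has the pod dial its own Service name, exercising hairpin NAT through the CNI. A minimal manual reproduction, assuming the Service created by testdata/netcat-deployment.yaml is named netcat and exposes port 8080, as the nc target in the logged command implies:

    kubectl --context auto-308474 get svc netcat -o wide
    kubectl --context auto-308474 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo hairpin-ok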
TestNetworkPlugins/group/kindnet/Start (85.47s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1020 13:27:12.137166  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:27:16.729214  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.465208058s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (68.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1020 13:27:32.618622  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:28:13.580122  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.714322812s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.71s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-v546c" [a30df193-70d0-4a6c-9ba1-15326c5f9c01] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003959982s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-sxwt9" [c1bdbebe-0275-4697-b249-6bc14a51f1e9] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-sxwt9" [c1bdbebe-0275-4697-b249-6bc14a51f1e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004088216s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
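The ControllerPod checks poll until pods matching the CNI's label report Ready. A rough standalone equivalent, sketched with kubectl wait rather than the test suite's own polling helper:

    kubectl --context calico-308474 -n kube-system wait pod \
      -l k8s-app=calico-node --for=condition=Ready --timeout=10m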
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-308474 "pgrep -a kubelet"
I1020 13:28:41.140678  298259 config.go:182] Loaded profile config "kindnet-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-308474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4lkjh" [1128b25c-bece-4932-9713-874d223f147d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4lkjh" [1128b25c-bece-4932-9713-874d223f147d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.003901605s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.30s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-308474 "pgrep -a kubelet"
I1020 13:28:41.933731  298259 config.go:182] Loaded profile config "calico-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (12.47s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-308474 replace --force -f testdata/netcat-deployment.yaml
I1020 13:28:42.328000  298259 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lkfjw" [3bec8828-a59d-4f85-8d1a-55b38fc62e75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lkfjw" [3bec8828-a59d-4f85-8d1a-55b38fc62e75] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003314488s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.47s)
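Each NetCatPod step force-replaces the netcat deployment and waits for its pod to reach Running. A rough manual equivalent of that flow, where kubectl rollout status stands in for the test's pod-matching wait loop:

    kubectl --context calico-308474 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-308474 rollout status deployment/netcat --timeout=15m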
TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-308474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-308474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (71.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.553637122s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.55s)

TestNetworkPlugins/group/enable-default-cni/Start (82.55s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1020 13:29:27.503345  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/old-k8s-version-995203/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:29:35.501682  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:29:56.300266  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:29.050080  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:29.056466  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:29.067862  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:29.089227  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:29.130606  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:29.212141  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:29.373769  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:29.695567  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:30.337687  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:30:31.619544  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m22.548040562s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.55s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-308474 "pgrep -a kubelet"
I1020 13:30:33.589592  298259 config.go:182] Loaded profile config "custom-flannel-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-308474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-86dbr" [f3633db4-46f0-48b4-8387-23e2f819cbd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1020 13:30:34.181174  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-86dbr" [f3633db4-46f0-48b4-8387-23e2f819cbd2] Running
E1020 13:30:39.302623  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004560811s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-308474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-308474 "pgrep -a kubelet"
I1020 13:30:44.879647  298259 config.go:182] Loaded profile config "enable-default-cni-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-308474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kqnkx" [dd745b41-2bab-45f0-bc3f-e7c54c01d3f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kqnkx" [dd745b41-2bab-45f0-bc3f-e7c54c01d3f9] Running
E1020 13:30:49.545184  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004508303s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-308474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (73.86s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1020 13:31:10.026565  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m13.857693755s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.86s)

TestNetworkPlugins/group/bridge/Start (89.97s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1020 13:31:19.376425  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/addons-399470/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:50.987927  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/no-preload-744804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.066021  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.072760  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.084189  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.105564  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.146909  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.228216  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.390224  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.640908  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:51.712402  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:52.354534  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:53.636026  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:56.197935  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:31:59.805486  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:32:01.319187  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:32:11.561500  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:32:16.729196  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/functional-749689/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1020 13:32:19.343904  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/default-k8s-diff-port-794175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-308474 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m29.973846282s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.97s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-cgj2z" [b0fee7fc-4640-46a3-b86e-9242ebfa0f64] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006076661s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-308474 "pgrep -a kubelet"
I1020 13:32:28.029060  298259 config.go:182] Loaded profile config "flannel-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-308474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sl79h" [fdf6e07c-8218-4f1d-b5b8-432c1196394a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1020 13:32:32.042857  298259 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/auto-308474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-sl79h" [fdf6e07c-8218-4f1d-b5b8-432c1196394a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003621344s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-308474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-308474 "pgrep -a kubelet"
I1020 13:32:49.559126  298259 config.go:182] Loaded profile config "bridge-308474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-308474 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6k6kz" [9ad1c9ea-67f6-4809-a03f-5888168fbeb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6k6kz" [9ad1c9ea-67f6-4809-a03f-5888168fbeb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004130056s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-308474 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-308474 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.45s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-415037 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-415037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-415037
--- SKIP: TestDownloadOnlyKic (0.45s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-972433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-972433
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.13s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-308474 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-308474

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-308474

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-308474

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-308474

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-308474

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-308474

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-308474

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-308474

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-308474

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-308474

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-308474

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-308474" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 20 Oct 2025 13:11:49 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-314577
contexts:
- context:
cluster: kubernetes-upgrade-314577
user: kubernetes-upgrade-314577
name: kubernetes-upgrade-314577
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-314577
user:
client-certificate: /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/kubernetes-upgrade-314577/client.crt
client-key: /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/kubernetes-upgrade-314577/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-308474

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-308474"

                                                
                                                
----------------------- debugLogs end: kubenet-308474 [took: 3.971181787s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-308474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-308474
--- SKIP: TestNetworkPlugins/group/kubenet (4.13s)
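
Every probe above fails the same way because the skip at net_test.go:93 fires before "minikube start" ever runs: the kubeconfig dumped under ">>> k8s: kubectl config:" contains no kubenet-308474 context, only a leftover kubernetes-upgrade-314577 entry with current-context set to "", so kubectl probes report a missing context and minikube probes report a missing profile. A minimal sketch of the runtime gate, assuming a hypothetical TEST_RUNTIME environment variable in place of the suite's real container-runtime flag:

package nettest

import (
	"os"
	"strings"
	"testing"
)

// containerRuntime stands in for the suite's container-runtime selection;
// the real tests read this from their own flags.
func containerRuntime() string { return os.Getenv("TEST_RUNTIME") }

func TestKubenet(t *testing.T) {
	// kubenet is not a CNI plugin; the crio runtime requires a CNI,
	// so this combination is skipped before a cluster is created.
	if strings.Contains(containerRuntime(), "crio") {
		t.Skip("Skipping the test as the crio container runtime requires CNI")
	}
	// The network-plugin test body would run here.
}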

                                                
                                    
TestNetworkPlugins/group/cilium (4.43s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-308474 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-308474

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-308474

>>> host: /etc/nsswitch.conf:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /etc/hosts:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /etc/resolv.conf:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-308474

>>> host: crictl pods:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: crictl containers:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> k8s: describe netcat deployment:
error: context "cilium-308474" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-308474" does not exist

>>> k8s: netcat logs:
error: context "cilium-308474" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-308474" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-308474" does not exist

>>> k8s: coredns logs:
error: context "cilium-308474" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-308474" does not exist

>>> k8s: api server logs:
error: context "cilium-308474" does not exist

>>> host: /etc/cni:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: ip a s:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: ip r s:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: iptables-save:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: iptables table nat:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-308474

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-308474

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-308474" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-308474" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-308474

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-308474

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-308474" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-308474" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-308474" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-308474" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-308474" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: kubelet daemon config:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> k8s: kubelet logs:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21773-296391/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 20 Oct 2025 13:11:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-314577
contexts:
- context:
    cluster: kubernetes-upgrade-314577
    user: kubernetes-upgrade-314577
  name: kubernetes-upgrade-314577
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-314577
  user:
    client-certificate: /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/kubernetes-upgrade-314577/client.crt
    client-key: /home/jenkins/minikube-integration/21773-296391/.minikube/profiles/kubernetes-upgrade-314577/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-308474

>>> host: docker daemon status:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: docker daemon config:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: docker system info:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: cri-docker daemon status:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: cri-docker daemon config:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: cri-dockerd version:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: containerd daemon status:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: containerd daemon config:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: containerd config dump:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: crio daemon status:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: crio daemon config:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: /etc/crio:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

>>> host: crio config:
* Profile "cilium-308474" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-308474"

----------------------- debugLogs end: cilium-308474 [took: 4.262635986s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-308474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-308474
--- SKIP: TestNetworkPlugins/group/cilium (4.43s)
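
The cilium block repeats the kubenet pattern, plus cilium-specific daemon-set and deployment probes, and fails identically because the cilium-308474 profile was never created. Each ">>> " entry is one diagnostic command run against the profile; a simplified sketch of that collection loop is below, where the probe labels and command lines are illustrative stand-ins, not minikube's exact helpers:

package nettest

import (
	"fmt"
	"os/exec"
)

// probe pairs a debugLogs label with the command that produces it.
type probe struct {
	label string
	args  []string
}

// dumpDebugLogs is a simplified sketch of the loop behind the ">>> ..."
// entries above; only two representative probes are shown.
func dumpDebugLogs(profile string) {
	probes := []probe{
		{"k8s: describe cilium daemon set", []string{"kubectl", "--context", profile, "describe", "ds", "cilium", "-n", "kube-system"}},
		{"host: crio config", []string{"minikube", "-p", profile, "ssh", "sudo crio config"}},
	}
	for _, p := range probes {
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Printf(">>> %s:\n%s", p.label, out)
		if err != nil {
			// With no such profile, kubectl reports a missing context and
			// minikube reports a missing profile, exactly as logged above.
			fmt.Println(err)
		}
	}
}

Because the probes run unconditionally even after a skip, a skipped network-plugin test still emits a full debugLogs block; the "[pass: true]" marker in the start line indicates the diagnostics collection itself succeeded.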

                                                
                                    